
CN110176020A - A method for sorting bird's nest impurities by fusing 2D and 3D images - Google Patents


Info

Publication number
CN110176020A
CN110176020A (application CN201910282067.1A)
Authority
CN
China
Prior art keywords
image
bird's nest
feather
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910282067.1A
Other languages
Chinese (zh)
Inventor
黄琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910282067.1A priority Critical patent/CN110176020A/en
Publication of CN110176020A publication Critical patent/CN110176020A/en
Pending legal-status Critical Current

Links

Classifications

    • A HUMAN NECESSITIES
    • A23 FOODS OR FOODSTUFFS; TREATMENT THEREOF, NOT COVERED BY OTHER CLASSES
    • A23L FOODS, FOODSTUFFS OR NON-ALCOHOLIC BEVERAGES, NOT OTHERWISE PROVIDED FOR; PREPARATION OR TREATMENT THEREOF
    • A23L33/00 Modifying nutritive qualities of foods; Dietetic products; Preparation or treatment thereof
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mycology (AREA)
  • Health & Medical Sciences (AREA)
  • Nutrition Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Food Science & Technology (AREA)
  • Polymers & Plastics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for sorting bird's nest impurities by fusing 2D and 3D images, comprising the following steps: S1, reconstruction and identification of bird's nest impurities; S1.1, image preprocessing. Image preprocessing is a very important stage of bird's nest identification: image acquisition is often disturbed by various kinds of noise and by the surrounding environment, which not only degrades the image but also tends to bury the relevant information, hindering subsequent feature extraction. The invention improves the efficiency of sorting feather impurities from bird's nest and effectively reduces the production cost; it is more accurate than manual picking and can perform stable feather-impurity picking over long periods; introducing the method improves the quality of bird's nest products, lowers their missed-detection and false-detection rates, and yields reliable, stable and accurate inspection of bird's nest products.

Description

A method for sorting impurities in bird's nest by fusing 2D and 3D images

Technical field

The invention relates to the technical field of bird's nest impurity identification, and in particular to a method for sorting bird's nest impurities by fusing 2D and 3D images.

Background art

Removing feather impurities is an important step in the bird's nest processing industry. At present, feather impurities are picked from bird's nest by hand, which has the following problems. On the one hand, manual picking depends mainly on subjective judgment and experience, so it is difficult to obtain reliable, stable and accurate detection results; there is no uniform standard, and the false-detection and missed-detection rates are high, so the sorted products are uneven in quality, harming the enterprise's interests. On the other hand, the working environment and the long hours of manual picking cause considerable damage to workers' eyes, cervical spine, and physical and mental health.

Summary of the invention

The purpose of the invention is to overcome the shortcomings and deficiencies of the prior art by providing a method for sorting bird's nest impurities that fuses 2D and 3D images. The method identifies and locates the impurities with machine-vision technology, which improves working efficiency, lowers labour costs, and greatly strengthens an enterprise's competitiveness in the market.

The purpose of the invention is achieved through the following technical solution:

A method for sorting bird's nest impurities by fusing 2D and 3D images comprises the following steps:

S1, reconstruction and identification of bird's nest impurities;

S1.1, image preprocessing;

Image preprocessing is a very important stage of bird's nest identification. Image acquisition is often disturbed by various kinds of noise and by the surrounding environment, which not only degrades the image but also tends to bury the relevant information, hindering subsequent feature extraction. To filter out noise, improve image quality and highlight the relevant information, image preprocessing is therefore performed before the bird's nest impurities are identified and detected;

S1.1.1, image filtering;

During transmission and processing, bird's nest images are often polluted by various kinds of noise, producing interfering dark or bright spots that reduce image quality and degrade the accuracy of feature extraction. An effective filtering algorithm is therefore needed to suppress the noise; commonly used algorithms include frequency-domain filtering and, in the spatial domain, mean filtering and median filtering;

The median filter is a neighbourhood operation: the pixel values covered by the template are sorted in ascending order, and the median of this sequence is assigned to the pixel at the template centre. If the template covers an odd number of points, the gray value of the middle point of the sorted sequence is taken as the median; if it covers an even number of points, the average of the two middle values of the sorted sequence is taken as the median;

Because the effect of median filtering depends on the window size (too large blurs the edges, too small denoises poorly), the algorithm is improved as follows: the image is scanned line by line, and for each pixel it is first checked whether the pixel is the maximum or minimum of the neighbourhood covered by the filter window; if so, the pixel is processed with the normal median filter; if not, it is left unchanged;

The improved 3×3 median filter is used for image filtering;
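The improved median filter described above can be sketched in a few lines of numpy; the function name and the edge-padding choice are illustrative, not taken from the patent:

```python
import numpy as np

def improved_median_filter(img, k=3):
    """Selective k x k median filter: a pixel is replaced by the window
    median only when it is the extreme (minimum or maximum) of the
    neighbourhood covered by the window, i.e. a likely noise impulse;
    all other pixels are left unchanged."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):          # scan the image line by line
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            if img[i, j] == win.min() or img[i, j] == win.max():
                out[i, j] = np.median(win)
    return out
```

A single salt-and-pepper impulse is removed, while pixels that are not local extremes keep their original values, which preserves edges better than unconditional median filtering.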

S1.1.2, image enhancement;

A piecewise linear transformation function is used to enhance image contrast, i.e. to increase the contrast between the parts of the original image: the gray-level ranges of interest in the input image are stretched, while the uninteresting ranges are relatively suppressed;

For an 8-bit image, the piecewise linear transformation function has the standard three-segment form:

y = (y1/x1)·x for 0 ≤ x < x1; y = ((y2 - y1)/(x2 - x1))·(x - x1) + y1 for x1 ≤ x < x2; y = ((255 - y2)/(255 - x2))·(x - x2) + y2 for x2 ≤ x ≤ 255 (3.1)

where (x1, x2) and (y1, y2) are the main parameters of eq. (3.1): x1 and x2 delimit the gray-level range to be transformed, while y1 and y2 determine the slopes of the linear segments;
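Since the formula image of eq. (3.1) is not reproduced in the source, the sketch below assumes the standard three-segment contrast stretch for 8-bit images that the surrounding text describes (x1, x2 delimit the stretched range; y1, y2 set the segment endpoints):

```python
import numpy as np

def piecewise_linear(img, x1, x2, y1, y2, L=255):
    """Three-segment gray-level stretch: [0, x1) is mapped with slope
    y1/x1, [x1, x2) with slope (y2 - y1)/(x2 - x1), and [x2, L] with
    slope (L - y2)/(L - x2)."""
    x = img.astype(np.float64)
    out = np.where(
        x < x1, y1 / x1 * x,
        np.where(x < x2,
                 (y2 - y1) / (x2 - x1) * (x - x1) + y1,
                 (L - y2) / (L - x2) * (x - x2) + y2))
    return np.clip(out, 0, L).astype(np.uint8)
```

Choosing y2 - y1 larger than x2 - x1 steepens the middle segment, which is what stretches the contrast of the gray range of interest.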

S1.2, image segmentation;

Image segmentation uses thresholding: background and object are separated by selecting an optimal threshold. A reasonable threshold is set and each pixel is tested against it; pixels on one side of the threshold are set to 0 and the rest to 1, separating the target of interest from the image and producing a binary image;

Threshold segmentation transforms the image input f into the output g as follows:

g(i, j) = 1 if f(i, j) ≥ T, and g(i, j) = 0 otherwise

where T is the chosen threshold, g(i, j) = 0 marks image elements of the background and g(i, j) = 1 marks image elements of the target object. That is, all pixels of the image f are scanned: if f(i, j) ≥ T, the element g(i, j) of the segmented image is an object pixel; otherwise it is a background pixel;
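The thresholding rule above can be written directly as a vectorized comparison; this is a minimal sketch, and selecting the optimal T (for example with Otsu's method) is left out:

```python
import numpy as np

def threshold_segment(img, T):
    """g(i, j) = 1 where f(i, j) >= T (object pixels), 0 elsewhere
    (background), producing the binary image directly."""
    return (np.asarray(img) >= T).astype(np.uint8)
```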

S1.3, feature selection and feather-impurity region extraction;

After the regions of interest have been obtained by segmentation, simple region descriptors can be used as features representing each region, and these regional features can be combined into feature vectors for classification;

The simple region descriptors are: perimeter, area, compactness, region centroid, gray-level mean, gray-level median, the smallest rectangle enclosing the region, the minimum and maximum gray level, the numbers of pixels above and below the mean, and the Euler number;
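A hedged sketch of a few of the listed descriptors computed with plain numpy; the 4-neighbour perimeter estimate and the dictionary layout are illustrative choices, not the patent's implementation:

```python
import numpy as np

def region_descriptors(mask, gray):
    """A subset of the simple descriptors: area, a 4-connected boundary
    estimate of the perimeter, centroid, bounding box, and the gray-level
    mean and median over a boolean region mask of a gray image."""
    ys, xs = np.nonzero(mask)
    padded = np.pad(mask, 1)
    # count region pixels' background 4-neighbours as a perimeter estimate
    perim = int(sum(
        np.logical_and(padded[1:-1, 1:-1],
                       ~np.roll(padded, s, axis=a)[1:-1, 1:-1]).sum()
        for (s, a) in ((1, 0), (-1, 0), (1, 1), (-1, 1))))
    vals = gray[mask]
    return {
        "area": len(ys),
        "perimeter": perim,
        "centroid": (ys.mean(), xs.mean()),
        "bbox": (ys.min(), xs.min(), ys.max(), xs.max()),
        "gray_mean": float(vals.mean()),
        "gray_median": float(np.median(vals)),
    }
```

The resulting values can be concatenated into the feature vector used for classification.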

S1.4, locating bird's nest feather-impurity regions;

Before the identified feather impurities are located, they must be classified and, where necessary, corrected;

S1.4.1, classification of feather-impurity regions;

The feather-impurity regions are divided into two classes according to the Euclidean distance d from each region's centroid to the nearest point of the region. With the region centroid (R, C) obtained from the centroid formulas (4.4) and (4.5), the Euclidean distance to the nearest region point (i1, j1) is:

d = sqrt((R - i1)² + (C - j1)²)

If d = 0, the feather region belongs to the first class: the centroid falls inside the feather region. If d ≠ 0, it belongs to the second class: the centroid falls outside the feather region;
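The two-class test can be sketched as follows, with the region given as a list of pixel coordinates and the centroid as (R, C); function and variable names are illustrative:

```python
import numpy as np

def classify_region(region_pixels, centroid):
    """d = min over region points (i1, j1) of sqrt((R-i1)^2 + (C-j1)^2);
    d == 0 -> class 1 (centroid inside the region), d != 0 -> class 2
    (centroid outside the region, needs correction)."""
    pts = np.asarray(region_pixels, dtype=np.float64)
    dists = np.linalg.norm(pts - np.asarray(centroid, dtype=np.float64), axis=1)
    d = float(dists.min())
    return d, 1 if d == 0.0 else 2
```

Elongated or curved feather regions (rings, arcs) typically land in class 2, since their centroid lies off the region itself.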

The first class, with the centroid inside the feather region, is exactly what is required; regions of the second class must be corrected: the centroid lying outside the region is mapped by a correction algorithm to a new point (ic, jc) inside the feather region;

S1.4.2, centroid correction;

To correct a centroid lying outside its feather region, a line-extension algorithm based on the semi-major axis of the minimum circumscribed ellipse is introduced;

S1.5, 3D reconstruction of bird's nest impurities;

Stereo registration of the area-array camera's 2D image with the depth camera's 3D image uses the spatial geometric coordinate transformation to find the correspondence between the pixel coordinates of the two images. First, the Mark region and its centroid coordinates are extracted from the area-array camera's 2D bird's nest image and from the 3D camera's bird's nest image. Next, feather-impurity features are extracted from two or more area-array images to obtain the centroid coordinates of the feather-impurity regions. The matching feather-impurity feature-point regions in the 3D bird's nest image are then computed from the Mark-point conversion formula. Finally, the images are matched and the 3D model is generated;

S1.5.1, acquiring feather-impurity point-cloud data and reconstructing the impurity model;

According to the generated 3D impurity regions, the original 3D bird's nest image is segmented to produce images of the specified feather-impurity regions. Each 3D feather-impurity region image is then decomposed into images holding the X, Y and Z coordinates of the three-dimensional points, and these coordinate images are converted into a 3D point cloud of the feather impurities, giving discrete three-dimensional feature points of the impurity surface. To reconstruct the impurity surface, the points are additionally triangulated, finally recovering the surface of the feather impurities;

S1.5.2, feature identification of bird's nest feather impurities;

The Mark-point formula gives the correspondence between the two area-array bird's nest images and the 3D camera's bird's nest image, yielding the three-dimensional representation in the 3D image of all points of the impurity regions;

According to the generated 3D impurity regions, the original 3D image is reduced to the specified feather-impurity region images, each of which is decomposed into x, y and z coordinate images of the 3D points, the z image being the height image. Taking the z image as the processing object, the centroid formula gives the feather-impurity coordinates (x, y), and every two-dimensional coordinate point of the three-dimensional image corresponds to a fixed height value Z. Since the gray value of the z image is its height value Z, the gray-level mean Mean and deviation Deviation are first computed, centred on the centroid in the z image, over the annulus between the minimum circle of the feather-impurity region enlarged by 30 pixels and enlarged by 60 pixels, as described by the following formulas:

The height Z is then:

Z = Mean - Mean1 (3.6)

The three-dimensional coordinates (Row3m, Colm3m, Z) of each feather region can thus be obtained from the 3D camera;
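The Mean and Deviation formulas are not reproduced in the source, so the sketch below encodes only one plausible reading of the prose: Mean is taken over the region's minimum circle, Mean1 over the surrounding annulus between +30 and +60 pixels (the local background), and the height is Z = Mean - Mean1. Function and parameter names are illustrative:

```python
import numpy as np

def ring_height(z_img, center, r_min, width=30):
    """Height of an impurity from the z (height) image: Mean is the gray
    mean inside the minimum circle of radius r_min around the centroid,
    Mean1 the mean over the annulus between r_min + width and
    r_min + 2 * width; the impurity height is Z = Mean - Mean1."""
    z = np.asarray(z_img, dtype=np.float64)
    i, j = np.indices(z.shape)
    r = np.hypot(i - center[0], j - center[1])
    mean_in = z[r <= r_min].mean()
    ring = (r > r_min + width) & (r <= r_min + 2 * width)
    mean_bg = z[ring].mean()
    return mean_in - mean_bg
```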

S2, experiments;

S2.1, camera calibration;

S2.1.1, distortion correction;

The camera lenses used in vision systems usually exhibit some degree of distortion. The distortion of a pixel depends on its distance from the image centre: the closer to the centre, the smaller the distortion. This distortion is nonlinear and can be described by the following formulas:

In the formulas above, the left-hand coordinates are the ideal, distortion-free pixel coordinates of the linear imaging model, (x, y) are the actual image-point coordinates, and δx and δy are the nonlinear distortion terms, which depend on the position of the point in the image and can be expressed as:

Here the first term of δx (and of δy) is the radial distortion, the second the centrifugal (decentering) distortion and the third the thin-prism distortion; the coefficients in the formulas are the nonlinear distortion parameters. Introducing too many distortion parameters makes the solution unstable and hampers accuracy, so formula (4.2) can be simplified to:

It is then evident that the distortion grows with the radial distance, i.e. the parts of the image far from the centre are the most severely distorted;
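After the simplification, only the radial term of the distortion model remains. A minimal sketch of that first-order (optionally second-order) radial model follows, with coordinates normalized relative to the image centre; the parameter names k1 and k2 follow the usual convention and are not taken from the patent:

```python
def undistort_point(x, y, k1, k2=0.0):
    """Radial-only distortion model kept after the simplification: the
    corrected coordinates are the measured ones scaled by
    1 + k1*r^2 + k2*r^4, with r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

Because the correction grows with r², points twice as far from the centre receive far more than twice the correction, matching the observation above.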

S2.1.2, Mark point selection;

Through the Mark points, the coordinates of a detected object in the two-dimensional image are converted into the three-dimensional image; the camera-calibration goal is met when the converted three-dimensional coordinates coincide with the detected object's position;

Images are acquired with an area-array camera combined with a 3D camera. The Mark points chosen are circles or rectangles of a certain height, with uniform colour and regular shape;

Because the Mark points differ markedly from the detected objects in gray level and shape, the region containing a Mark point is first segmented out with a gray-level threshold, and the Mark point is then identified by feature extraction;

S2.1.3, calculating Mark-point centroid coordinates;

The Mark points of the two-dimensional and three-dimensional images are identified with the method above, and their respective centroid coordinates are computed. For a discretized 2D digital image with f(x, y) ≥ 0, the (p+q)-order moment Mpq and the central moment μpq are defined as:

Mpq = Σi Σj i^p j^q f(i, j), μpq = Σi Σj (i - ic)^p (j - jc)^q f(i, j)

where (ic, jc) is the centroid, with

ic = M10/M00, jc = M01/M00 (4.4), (4.5)

The Mark-point centroids of the two-dimensional and three-dimensional images obtained from these formulas are (i1, j1) and (i2, j2), respectively;
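The standard raw-moment centroid behind formulas (4.4) and (4.5) is a two-liner in numpy:

```python
import numpy as np

def centroid(img):
    """Raw-moment centroid: M_pq = sum_i sum_j i^p j^q f(i, j);
    ic = M10 / M00, jc = M01 / M00."""
    f = np.asarray(img, dtype=np.float64)
    i, j = np.indices(f.shape)
    m00 = f.sum()
    return (i * f).sum() / m00, (j * f).sum() / m00
```

Applied to the segmented Mark-point region of each image, this yields (i1, j1) for the 2D image and (i2, j2) for the 3D image.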

S2.1.4, Mark-point conversion formula for image coordinate transformation;

Using the Mark-point coordinates, the coordinates of the detected object in the two-dimensional image are converted into the three-dimensional image through the coordinate transformation; if the converted coordinate point in the three-dimensional image coincides with the detected object's position, the calibration between the area-array camera and the 3D camera is complete;

S2.2, experimental results;

S2.2.1, Mark-point centroid coordinates;

The centroid formula is applied to compute the centroids of the Mark points in the two-dimensional and three-dimensional images;

S2.2.2, centroid coordinates of the bird's nest impurity regions;

Two empty arrays R and C are created to store the row and column coordinates of the feather-region centroids. 20 feather-impurity regions are detected in the 2D bird's nest image, with region centroid coordinates (R, C);

S2.2.3, coordinates of the bird's nest impurity regions;

Two empty arrays Rows and Columns are created to store the row and column coordinates of the feather-region centroids. 20 feather-impurity regions are detected in the 2D bird's nest image, with region coordinates (Rows, Columns);

The coordinates of the 3D feather-impurity regions are computed from the Mark-point conversion formula: two empty arrays Rows3m and Columns3m are created to store the row and column coordinates of the feather-region centroids in the 3D bird's nest image;

S2.2.4, 3D model of the bird's nest impurities;

From the acquired 3D coordinates (Row3m, Colm3m, Z) of the feather-impurity regions, the 3D model of the impurities is generated;

The exact position and area of the impurities can be read off the model, which makes picking them convenient. However, these are only the impurities on the surface of the bird's nest: feather impurities inside the nest that are not exposed on the surface are not detected. The surface-sorted bird's nest can therefore be imaged again with the combined area-array and 3D cameras to capture the internal impurities, and the images are processed with the same method as above to sort out the internal feather-impurity regions.

Preferably, the line-extension algorithm on the semi-major axis of the minimum circumscribed ellipse in step S1.4.2 proceeds as follows:

Step 1: compute the minimum circumscribed ellipse of each second-class feather region, obtaining the semi-major axis a of each region's ellipse;

Step 2: connect the centroid (R, C) outside the region (start point) with the region point (i1, j1) nearest the centroid (end point), obtaining the line segment L;

Step 3: starting from the nearest point (i1, j1), extend the line L by the length a, obtaining the new line M;

Step 4: compute the intersection (m, n) of the new line M with the feather region, and then the midpoint (p, q) of (m, n) and the nearest point (i1, j1); this midpoint is the corrected point.
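Steps 1 to 4 can be sketched as follows. The sub-pixel step size, the rounding scheme, and taking the last region pixel along the extension as the intersection (m, n) are illustrative choices; the semi-major axis a is assumed to come from a separate ellipse-fitting routine:

```python
import numpy as np

def correct_centroid(centroid, nearest, a, mask):
    """Extend the line from the outside centroid (R, C) through the nearest
    region point (i1, j1) by the semi-major-axis length a, take the last
    region pixel (m, n) hit along that extension as the intersection with
    the region, and return the midpoint (p, q) of (m, n) and (i1, j1)."""
    c = np.asarray(centroid, dtype=np.float64)
    n0 = np.asarray(nearest, dtype=np.float64)
    direction = (n0 - c) / np.linalg.norm(n0 - c)
    last = None
    for t in np.arange(0.5, a + 0.5, 0.5):   # walk in half-pixel steps
        p = n0 + t * direction
        mi, ni = int(np.rint(p[0])), int(np.rint(p[1]))
        if 0 <= mi < mask.shape[0] and 0 <= ni < mask.shape[1] and mask[mi, ni]:
            last = (mi, ni)
    if last is None:                          # no intersection within length a
        return (float(n0[0]), float(n0[1]))
    return ((last[0] + n0[0]) / 2.0, (last[1] + n0[1]) / 2.0)
```

Because the midpoint lies between two points of the region along the extension, it falls inside the feather region for the elongated shapes this correction targets.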

Preferably, the calibration process in step S2.1.4 is as follows:

Step 1: find the 2D-image Mark-point coordinates (i1, j1) and the 2D-image impurity-region coordinates (Rows, Columns);

Step 2: compute the row and column offsets between the Mark-point coordinates and the impurity-region coordinates in the 2D image:

Row = Rows - i1

Col = Columns - j1, (4.6)

Step 3: compute the length ratio and width ratio between the 2D image and the 3D image:

(L2D is the length of the 2D image, L3D the length of the 3D image)

Step 4: compute the row and column coordinates of the 3D impurity region using the Mark points:

Row3m = KL·i2 + Row

Colm3m = KW·j2 + Col, (4.8)

Step 5: the resulting Row3m and Colm3m are the two-dimensional coordinates of each feather region in the three-dimensional image;

Step 6: every two-dimensional coordinate point of the three-dimensional image corresponds to a fixed height value Z, so the three-dimensional coordinates (Row3m, Colm3m, Z) of each feather region are obtained from the 3D camera;

A maximum permissible deviation σ = 0.5 mm is set. If the computed three-dimensional coordinates (Row3m, Colm3m, Z) lie within this deviation, the camera calibration is complete; otherwise the lens distortion is corrected again and the computation restarts from Step 1. Since the permissible deviation is specified to 0.1 mm precision, the recalculated results are kept to 0.01 mm precision to reduce the computation error.
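Equations (4.6) and (4.8) can be sketched as a small helper. Since the ratio formula (4.7) is not reproduced in the source, the scale factors KL and KW are taken here as inputs rather than computed:

```python
def to_3d_coords(mark2d, mark3d, region2d, kl, kw):
    """Map a 2D impurity-region coordinate into the 3D image via the Mark
    points: (4.6) Row = Rows - i1, Col = Columns - j1;
    (4.8) Row3m = KL*i2 + Row, Colm3m = KW*j2 + Col."""
    i1, j1 = mark2d          # Mark-point centroid in the 2D image
    i2, j2 = mark3d          # Mark-point centroid in the 3D image
    rows, cols = region2d    # impurity-region coordinate in the 2D image
    row, col = rows - i1, cols - j1
    return kl * i2 + row, kw * j2 + col
```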

Compared with the prior art, the invention has the following beneficial effects:

The invention improves the efficiency of sorting feather impurities from bird's nest and effectively reduces the production cost. It is more accurate than manual picking and can carry out stable feather-impurity picking over long periods. Introducing the method improves the quality of bird's nest products, lowers their missed-detection and false-detection rates, and gives reliable, stable and accurate inspection. At the same time it reduces the workers' labour intensity and the damage to their eyes, cervical spine, and physical and mental health; it also raises working efficiency, lowers labour costs, and greatly strengthens the enterprise's competitiveness in the market.

Brief description of the drawings

Fig. 1 is the flow chart of feather-impurity identification and localization of the invention;

Fig. 2 is the original 2D bird's nest image of the invention;

Fig. 3 is the original 3D bird's nest image of the invention;

Fig. 4 is a schematic diagram of the median filtering algorithm of the invention;

Fig. 5 is a schematic diagram of the piecewise linear gray-level transformation of the invention;

Fig. 6 is the 2D bird's nest image after threshold segmentation of the background according to the invention;

Fig. 7 is the bird's nest image after the improved median filtering of the invention;

Fig. 8 is a schematic diagram of the impurity regions of the invention;

Fig. 9 is a schematic diagram of the 2D feather-impurity centroid points of the invention;

Fig. 10 is a schematic diagram of the 3D impurity regions of the invention;

Fig. 11 is the flow chart of three-dimensional impurity depth-image processing of the invention;

Fig. 12 is a schematic diagram of the three-dimensional height values of the impurity centroids of the invention;

Fig. 13 is a schematic diagram of the 2D Mark-point regions of the invention;

Fig. 14 is a schematic diagram of the 3D Mark-point regions of the invention;

Fig. 15 is a schematic diagram of the two-dimensional Mark-point centroids of the invention;

Fig. 16 is a schematic diagram of the three-dimensional Mark-point centroids of the invention;

Fig. 17 is the 3D model of bird's nest impurities generated by the invention.

具体实施方式Detailed ways

下面结合实施例及附图对本发明作进一步详细的描述,但本发明的实施方式不限于此。The present invention will be further described in detail below in conjunction with the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.

1. Aiming at the problem of sorting feather impurities from edible bird's nest, the bird's nest impurity sorting method fusing 2D and 3D images of the present invention takes the bird's nest as the detection object and proposes a working mode combining an area-array camera with a 3D camera: the strengths of the area-array camera are used to sort impurities in the bird's nest feather regions; the 3D camera then acquires the height values of the bird's nest impurity map in the 3D image, forming the three-dimensional information of the impurity regions; finally, the obtained three-dimensional information of the feather impurities is combined to generate a three-dimensional model of the feather impurity regions, facilitating the picking of impurities from the bird's nest.

As shown in Figs. 1 to 17, the method proceeds as follows:

2. Fusion method of 2D and 3D images.

Since both the bird's nest and its feather impurities are irregular in shape and uneven in height, three-dimensional coordinates are needed to describe the impurity positions. Based on the shape and gray-scale characteristics of the bird's nest and its feather impurities, and as verified by repeated experiments, a working mode combining an area-array camera and a 3D camera is proposed: the strengths of the area-array camera are used to sort impurities in the bird's nest feather regions, and the 3D camera then acquires the height values of the bird's nest impurity map in the 3D image, forming the three-dimensional information of the impurity regions. Fig. 1 shows the flowchart of the bird's nest feather impurity processing of the invention.

Taking the bird's nest as the detection object, the present invention identifies and locates bird's nest feather impurities based on machine vision technology, and mainly completes the following work:

(1) Aiming at the problem that the bird's nest and its feather impurities are irregular in shape and uneven in height, so that three-dimensional coordinates are needed to describe the impurity positions, the present invention proposes a working mode combining an area-array camera and a 3D camera: the area-array camera identifies impurities in the bird's nest feather regions, and the 3D camera then acquires the image height values, yielding the three-dimensional information of the impurity regions;

(2) For feather impurity identification, after preprocessing the acquired images, a gray-level-based secondary iterative threshold selection method is proposed to highlight the feather impurity regions, and the feather impurities are then identified more precisely through shape feature extraction. Experimental results show high recognition accuracy and a low missed-detection rate, far better than manual inspection, meeting the design requirements of the present invention;

(3) For impurity region localization, the identified feather regions are classified by Euclidean distance; a straight-line extension method along the semi-major axis of a region's minimum circumscribed ellipse is proposed to correct centroids falling outside the feather regions; and a Mark-point-based positioning and matching method is proposed. Combined with the 3D camera, which can acquire the height value of every point in the image, the three-dimensional information of the feather impurity regions is obtained. The method is computationally simple and efficient, with a positioning error within 0.5 mm;

(4) Finally, the algorithms proposed in the present invention are tested and analyzed. Experimental results show that the proposed method has high detection accuracy, a low missed-detection rate, a false-detection rate below 4%, and a time cost far lower than manual inspection; it is free from interference such as human factors, and its performance in all respects meets both the expected results and the actual production and processing requirements.

3. Bird's nest impurity reconstruction and identification method.

The present invention first performs image preprocessing. As can be seen from Fig. 2 and Fig. 3, the background of the original bird's nest image is rather complex, so the image background must be removed first: image noise is filtered out by median filtering; the gray-level difference between the bird's nest and its feather impurities is exploited, using piecewise linear transformation to enhance the contrast between the bird's nest region and the background; an automatic threshold method based on gray-level secondary iterative selection is then proposed to segment the background away. The Mark point regions are extracted from the background-removed image, and the Mark point centroid coordinates are computed. Since the preliminarily segmented feather impurity regions are mixed with a small number of falsely detected regions, feather impurity feature selection and feature matching methods are proposed to remove the false detections and sort out the feather impurity regions again. For feather impurity localization, the present invention proposes a Mark point positioning method that converts the coordinates of the feather impurity regions in the two-dimensional image into the three-dimensional image; the 3D camera can acquire the height value of every point in the image, yielding the three-dimensional information of the feather impurity regions. Finally, the three-dimensional coordinates of the feather impurity regions are used to generate a 3D bird's nest impurity model, which facilitates picking out the bird's nest impurities.

3.1, Image preprocessing;

Image preprocessing is a very important link in the bird's nest recognition process. Image acquisition is often disturbed by various noise sources and the surrounding environment, which not only degrades the image but also tends to submerge the required information, complicating subsequent feature extraction. To filter out noise, improve image quality, and highlight the relevant information, image preprocessing is required before identifying and detecting bird's nest impurities. Commonly used image preprocessing algorithms include image filtering, image enhancement, image segmentation, and morphological processing. Adopting them both simplifies the later processing of the object and yields better results.

3.1.1, Image filtering;

The role of image filtering is to preserve the detail features of the acquired image as much as possible while eliminating or weakening useless information mixed into the target image. Image filtering enriches the image information content, improves image quality, and enhances the recognition effect; its result directly affects the subsequent image processing and is closely related to the effectiveness and reliability of the feature recognition step, making it an indispensable step in the preprocessing pipeline.

During the transmission and processing of bird's nest images, various noise sources often contaminate the image with dark or bright spots, which both degrade image quality and reduce the accuracy of feature extraction. An effective image filtering algorithm is therefore usually selected to counter the influence of noise. Common image filtering algorithms include frequency-domain filtering, spatial-domain mean filtering, and median filtering.

Median filtering is a nonlinear signal processing method; essentially, it is a statistical order filter. For a point (i, j) in the original image, median filtering takes the statistical median of all pixels in the neighborhood centered on that point as the response at (i, j). Compared with frequency-domain filtering and mean filtering, median filtering is not only fast and very effective against isolated noise pixels (such as impulse and salt-and-pepper noise), but also leaves the processed image relatively sharp and effectively preserves useful edge information.

The median filtering algorithm is a neighborhood operation. When median-filtering an image, the values under the template are sorted in ascending order, and the middle value of this sequence is assigned to the pixel at the template center. If the template has an odd number of points, the gray value of the middle pixel after sorting is taken as the median; if it has an even number of points, the average of the two middle gray values after sorting is taken as the median. The implementation is illustrated in Fig. 4. In practice, the template shape and size must be chosen according to the actual situation.

Since the effect of median filtering depends on the size of the filter window (too large blurs edges, too small denoises poorly), the present invention improves the median filtering algorithm as follows: the image is scanned line by line, and for each pixel it is judged whether that pixel is the maximum or minimum of the neighborhood pixels covered by the filter window; if so, normal median filtering is applied to the pixel; if not, the pixel is left unchanged. The present invention uses this improved 3×3 median filtering algorithm for image filtering; the result after the improved median filtering is shown in Fig. 7.
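The improvement above can be sketched in a few lines of Python (an illustrative re-implementation, not the patent's code; the image is assumed to be a grayscale list of lists, and border pixels are simply left unchanged):

```python
def improved_median_filter(img):
    """3x3 median filter that only smooths neighborhood extrema.

    A pixel is replaced by the median of its 3x3 window only if it is the
    strict maximum or minimum of its eight neighbours (i.e. likely noise);
    otherwise it is kept unchanged, which preserves edges better than
    plain median filtering.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; borders stay untouched
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            center = img[i][j]
            neighbours = window[:4] + window[5:]  # window without the center
            if center > max(neighbours) or center < min(neighbours):
                out[i][j] = sorted(window)[4]  # median of the 9 values
    return out
```

The extremum test is what distinguishes this from a plain median filter: edge pixels, which are usually not local extrema, pass through unchanged.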

3.1.2, Image enhancement;

Enhancing image contrast with a piecewise linear transformation function in effect enhances the contrast between parts of the original image; that is, it stretches the gray-level regions of interest in the input image while relatively suppressing those of no interest. The main advantage of the piecewise linear transformation is that its form can be composed arbitrarily.

The piecewise linear transformation function takes the following (standard three-segment) form:

g(x) = (y1 / x1) · x,                              0 ≤ x < x1
g(x) = ((y2 − y1) / (x2 − x1)) · (x − x1) + y1,    x1 ≤ x < x2
g(x) = ((255 − y2) / (255 − x2)) · (x − x2) + y2,  x2 ≤ x ≤ 255    (3.1)

(x1, x2) and (y1, y2) are the main parameters of formula (3.1). From the description of the function, x1 and x2 delimit the gray-level range of the processing object to be transformed, while y1 and y2 determine the slope of each linear segment.

当x1,x2,y1,y2分别取不同值的组合时,其得到变换效果各不相同。其分段线性变换函数的图形如图5所示。When x 1 , x 2 , y 1 , and y 2 take different combinations of values, the obtained transformation effects are different. The graph of its piecewise linear transformation function is shown in Figure 5.

3.2, Image segmentation;

Image segmentation divides an image into a number of specific regions with distinctive characteristics and extracts the objects of interest from them, typically relying on similarity and discontinuity.

The present invention selects the threshold segmentation method, separating the background from the object by choosing an optimal threshold. Threshold segmentation judges the image against a reasonably set threshold, setting the gray value of the parts within the threshold range to 0 and the rest to 1, thereby separating the object of interest from the image and producing a binary image. Image thresholding is an indispensable step in image processing.

Threshold segmentation transforms the input image f into an output image g as follows:

g(i, j) = 1,  f(i, j) ≥ T
g(i, j) = 0,  f(i, j) < T

In the above formula, T is the set threshold, g(i, j) = 0 denotes an image element of the background, and g(i, j) = 1 denotes an image element of the target object (or vice versa). That is, all pixels of the image f are scanned; if f(i, j) ≥ T, the element g(i, j) of the segmented image is an object pixel, otherwise it is a background pixel.

The result of threshold segmentation is shown in Fig. 6.
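A minimal sketch of the binarization above, paired with the classical iterative threshold selection on which the gray-level "secondary iterative selection" is based (the patent does not spell out its exact iteration scheme, so only the standard single loop is shown here):

```python
def iterative_threshold(pixels, eps=0.5):
    """Classical iterative threshold selection: split the pixels at T,
    average the two class means, and repeat until T stabilizes."""
    t = sum(pixels) / len(pixels)
    while True:
        low = [p for p in pixels if p < t]
        high = [p for p in pixels if p >= t]
        mean_low = sum(low) / len(low) if low else t
        mean_high = sum(high) / len(high) if high else t
        new_t = (mean_low + mean_high) / 2
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

def threshold_segment(img, t):
    """g(i, j) = 1 where f(i, j) >= t (object), else 0 (background)."""
    return [[1 if p >= t else 0 for p in row] for row in img]
```

On a bimodal histogram (bright bird's nest against a dark background), the loop converges to the midpoint between the two class means in a few iterations.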

3.3, Feature selection and feather impurity region extraction;

After the regions of interest have been obtained through image segmentation, some simple region descriptors can be used as features representing each region; these features are usually combined into feature vectors for classification. Common simple region descriptors include perimeter, area, compactness, region centroid, gray mean, gray median, the minimum rectangle enclosing the region, minimum or maximum gray level, the number of pixels above or below the mean, and the Euler number.

The bird's nest feather regions are extracted through feature selection; the present invention sets the empirical feature values LW to 4, S to 3.8, and the area Area to the range (800, 10000). The recognition result is shown in Fig. 8, where (a) is the original 2D bird's nest image and (b) shows the bird's nest impurity regions.

3.4, Locating bird's nest feather impurity regions;

(1) Classification of feather impurity regions. To locate the identified feather impurities, they must first be classified and corrected. Using the Euclidean distance to evaluate the value d between each feather impurity region's centroid and the point of its own region nearest to it, the feather impurity regions can be divided into two classes. The centroid coordinates (R, C) of a feather impurity region are obtained from the centroid formulas (4.4) and (4.5), and the Euclidean distance is:

d = min over (i, j) in the region of √((R − i)² + (C − j)²)

If d = 0, the feather region belongs to the first class, i.e., the centroid falls inside the feather region; if d ≠ 0, the feather region belongs to the second class, with the centroid falling outside the feather region. The first class, with the centroid inside the feather region, is exactly what the present invention seeks; the second class needs correction: the centroid falling outside a second-class feather region is passed through a correction algorithm to obtain a new point (ic, jc) lying inside the feather region.
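The two-class test can be sketched as follows (illustrative names; a region is given as its set of pixel coordinates, and d is the distance from the centroid to the nearest pixel of its own region):

```python
import math

def min_distance_to_region(centroid, region_pixels):
    """d = minimum Euclidean distance from the centroid (R, C) to the region.

    d == 0: centroid lies inside the region (first class, usable as-is);
    d != 0: centroid lies outside the region (second class, needs correction).
    """
    r, c = centroid
    return min(math.hypot(r - i, c - j) for i, j in region_pixels)
```

For a concave, crescent-shaped feather region the centroid often falls in the concavity, which is exactly the d ≠ 0 case the correction step handles.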

(2) Centroid correction. To correct centroids lying outside the feather regions, the present invention introduces a straight-line extension method along the semi-major axis of the minimum circumscribed ellipse. The algorithm proceeds as follows:

Step 1: Compute the minimum circumscribed ellipse of each feather region belonging to the second class, obtaining the semi-major axis a of each region's minimum circumscribed ellipse;

Step 2: Taking the out-of-region centroid (R, C) as the start point and the region point (i1, j1) nearest to the centroid as the end point, connect the two points to obtain the line segment L;

Step 3: Starting from the nearest point (i1, j1), extend the line L by the length a to obtain the new line M;

Step 4: Compute the intersection (m, n) of the new line M with the feather region, then compute the midpoint (p, q) of the point (m, n) and the nearest point (i1, j1); this midpoint is the corrected point sought.
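Steps 2–4 reduce to two small geometric helpers, sketched here (illustrative; finding the actual intersection (m, n) of line M with the region mask in Step 4 is assumed to be done separately, e.g. by rasterizing M against the region):

```python
def extend_point(start, end, length):
    """Step 3: the point reached by extending the segment start -> end
    past `end` by `length`, i.e. the far endpoint of the new line M."""
    (r0, c0), (r1, c1) = start, end
    d = ((r1 - r0) ** 2 + (c1 - c0) ** 2) ** 0.5
    return (r1 + (r1 - r0) * length / d,
            c1 + (c1 - c0) * length / d)

def midpoint(p, q):
    """Step 4: corrected centroid = midpoint of the intersection (m, n)
    and the nearest region point (i1, j1)."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
```

Usage: with centroid (R, C), nearest point (i1, j1), and semi-major axis a, `extend_point((R, C), (i1, j1), a)` gives the far end of M; intersecting M with the region yields (m, n), and `midpoint((m, n), (i1, j1))` is the corrected point (p, q).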

The centroid points of the feather impurity regions are shown in Fig. 9.

The bird's nest impurity regions are picked out and the 2D impurity region coordinates are generated, as shown in Fig. 8(b); according to the Mark point conversion formula, the 2D impurity region coordinates are then converted into 3D bird's nest impurity region coordinates, generating the 3D impurity coordinates, from which the 3D bird's nest impurity regions are produced. The generation process is shown in Fig. 10, where the black boxes in (a) mark the original 2D impurity regions, the gray boxes in (b) mark the 2D impurity regions, and (c) shows the 3D bird's nest impurity regions.

3.5, 3D bird's nest impurity reconstruction;

Stereo registration of the area-array camera's 2D image with the depth camera's 3D image uses the spatial geometric coordinate transformation to find the correspondence between pixel coordinates in the two images. First, the Mark regions and their centroid coordinates are extracted from both the area-array camera's 2D bird's nest image and the 3D camera's bird's nest image; next, feather impurity features are extracted from two or more area-array camera images to obtain the centroid coordinates of the feather impurity regions; then the matching feather impurity feature point regions in the 3D bird's nest image are computed from the Mark point conversion formula; finally, the images are matched and the 3D model is generated.

3.5.1, Acquisition of bird's nest feather impurity point cloud data and reconstruction of the bird's nest impurity model

According to the 3D bird's nest impurity regions generated in Fig. 10, the original 3D bird's nest image is segmented accordingly to produce the specified feather impurity region images, and each 3D feather impurity region image is then decomposed into images carrying the X, Y, and Z coordinate information of the three-dimensional points. Converting the three-dimensional X, Y, Z images into a 3D feather impurity point cloud yields only discrete three-dimensional feature points on the impurity surface; to reconstruct the impurity surface, the points are further triangulated, finally reconstructing the surface of the bird's nest feather impurities. The reconstruction process is shown in Fig. 11.

3.5.2, Feature identification of bird's nest feather impurities;

Using the Mark point formula, the correspondence between the two area-array camera bird's nest images and the 3D camera bird's nest image is obtained, yielding the three-dimensional representation in the 3D bird's nest image of all points of the bird's nest impurity regions.

According to the 3D bird's nest impurity regions generated above, the original 3D bird's nest image is cropped to the specified feather impurity region images, and each 3D feather impurity region image is decomposed into x, y, and z coordinate images of the 3D points, of which the z image is the height image. Taking the z image as the processing object, the centroid formula gives the 3D feather impurity coordinates (x, y); every two-dimensional coordinate point in the three-dimensional image corresponds to a fixed height value Z. Since the gray value of the z image is its height value Z, the gray-value mean Mean and deviation Deviation can first be computed over the ring differences taken, centered at the centroid coordinate in the z image, between the minimum enclosing circle of the feather impurity region, that circle enlarged by 30 pixels, and that circle enlarged by 60 pixels. They can be described by the following (standard) formulas:

Mean = (1/N) Σ g(x, y)    (3.4)

Deviation = √((1/N) Σ (g(x, y) − Mean)²)    (3.5)

where the sums run over the N pixels of the ring and g(x, y) is the gray (height) value of the z image. The height Z is then:

Z = Mean − Mean1    (3.6)

Thus the three-dimensional coordinates (Row3m, Colm3m, Z) of each feather region can be obtained from the 3D camera. Fig. 12 shows the height values computed according to formulas (3.4) and (3.5).
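Formulas (3.4)–(3.6) amount to ordinary mean and standard-deviation statistics over the masked pixels, sketched here (illustrative; `impurity_values` and `ring_values` would come from masking the z image with the impurity region and the surrounding ring, and reading Mean1 as the ring mean is an interpretation, since the text does not define it explicitly):

```python
def mean_and_deviation(values):
    """Mean (3.4) and standard deviation (3.5) of gray/height values."""
    n = len(values)
    mean = sum(values) / n
    deviation = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return mean, deviation

def height(impurity_values, ring_values):
    """Z = Mean - Mean1 (3.6): impurity-region mean height minus the
    surrounding-ring mean height (interpretation of Mean1 assumed)."""
    return (mean_and_deviation(impurity_values)[0]
            - mean_and_deviation(ring_values)[0])
```

Subtracting the ring mean makes Z a relative height, so the impurity's protrusion above the local bird's nest surface is measured rather than its absolute distance to the camera.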

The main idea of the 3D-feature-based recognition method is as follows: the 3D camera provides the depth information of the bird's nest image; a 3D point cloud model is obtained from the depth information; 3D feature descriptors such as target size, shape, and boundary are extracted from the point cloud model; and these 3D features are finally used for target recognition. The 3D-feature-based method has high recognition accuracy and robustness and can recognize multiple targets simultaneously.

4. Experiments.

4.1, Camera calibration;

The present invention acquires images with a combination of an area-array camera and a 3D camera; traditional camera calibration methods cannot meet the calibration requirements of this combination. Through repeated experiments, the present invention proposes the Mark point method for computing between two-dimensional and three-dimensional image coordinates, achieving the positioning requirements without traditional camera calibration. Since most camera lenses exhibit a certain degree of distortion, this section first analyzes camera distortion correction.

4.1.1, Distortion correction;

The camera lenses used in vision systems generally exhibit varying degrees of distortion; the distance of a pixel from the image center affects the degree of distortion, which is smaller closer to the center. This distortion is nonlinear and can be described by the following formula:

x̂ = x + δx(x, y)
ŷ = y + δy(x, y)    (4.1)

where (x̂, ŷ) are the ideal, distortion-free pixel coordinates conforming to the linear imaging model, (x, y) are the actual image point coordinates, and δx and δy are the nonlinear distortion values, which depend on the position of the point in the image and can be expressed (in the standard model) as:

δx(x, y) = k1·x·r² + (p1·(3x² + y²) + 2·p2·x·y) + s1·r²
δy(x, y) = k2·y·r² + (p2·(3y² + x²) + 2·p1·x·y) + s2·r²,  with r² = x² + y²    (4.2)

The first term of δx or δy is the radial distortion, the second is the centrifugal (decentering) distortion, and the third is the thin prism distortion. The coefficients in the formula are called the nonlinear distortion parameters. Studies show that introducing too many nonlinear parameters can destabilize the solution and hinder accuracy improvement. Usually, the nonlinear distortion is adequately described by the first (radial) term alone, so formula (4.2) can be simplified to:

δx(x, y) = k1·x·(x² + y²)
δy(x, y) = k2·y·(x² + y²)    (4.3)

It can be clearly seen that the distortion grows as the radial distance increases; that is, distortion is more severe in parts of the image far from the center.

4.1.2, Mark point selection;

Through repeated experiments, the present invention uses Mark points to convert the coordinates of a detected object in the two-dimensional image into the three-dimensional image, and achieves the camera calibration goal by checking whether the converted three-dimensional coordinates correspond to the detected object's position.

The present invention acquires images in the new combined area-array/3D camera mode, so Mark point selection is particularly important, as it directly affects the system's positioning accuracy. To improve recognition accuracy and speed, the Mark points should be circles or rectangles of uniform color and regular shape, and for ease of recognition they should also have a certain height (flush with the bird's nest surface). The present invention selects Mark points by both geometric shape and color; the circular and elliptical regions in Figs. 13 and 14 are the Mark points of the present invention.

The Mark points and the detected objects differ markedly in gray level and shape, so the present invention first segments out the region containing the Mark points with a gray threshold method and then identifies the Mark points by feature extraction.

4.1.3, Calculation of Mark point centroid coordinates;

Using the above Mark point identification method, the Mark points in the two-dimensional image and in the three-dimensional image are recognized respectively, and then their centroid coordinates are computed. For a discretized 2D digital image f(x, y) ≥ 0, the (p + q)-order moment Mpq and the central moment μpq are defined as:

Mpq = Σx Σy x^p · y^q · f(x, y)    (4.4)

μpq = Σx Σy (x − ic)^p · (y − jc)^q · f(x, y)    (4.5)

where (ic, jc) are the centroid coordinates, and

ic = M10 / M00,  jc = M01 / M00

From the above formulas, the Mark point centroid coordinates of the two-dimensional image and of the three-dimensional image are obtained as (i1, j1) and (i2, j2), respectively.
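The centroid computation from the moment formulas reduces to the zeroth- and first-order moments, sketched here for a gray image given as a list of lists:

```python
def centroid(img):
    """(ic, jc) = (M10/M00, M01/M00) for a gray image f(x, y) >= 0."""
    m00 = m10 = m01 = 0.0
    for x, row in enumerate(img):
        for y, f in enumerate(row):
            m00 += f          # M00: total intensity
            m10 += x * f      # M10: row-weighted intensity
            m01 += y * f      # M01: column-weighted intensity
    return m10 / m00, m01 / m00
```

On a binary Mark point mask (f ∈ {0, 1}) this is simply the average row and column of the white pixels.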

The two-dimensional and three-dimensional Mark point centroids are shown as the black dots at the circle center in Fig. 15 and at the ellipse center in Fig. 16, respectively.

4.1.4, Mark point conversion formula for image coordinate transformation;

According to the Mark point coordinates, the coordinates of the detected object in the two-dimensional image are converted into the three-dimensional image; if the converted coordinate point in the three-dimensional image lies at the position of the detected object, calibration between the area-array camera and the 3D camera is complete. The calibration procedure is as follows:

Step 1: Find the Mark point coordinates (i1, j1) in the 2D image and the impurity region coordinates (Rows, Columns) in the 2D image;

Step 2: Compute the row and column offsets between the Mark point coordinates and the bird's nest impurity region coordinates in the 2D image:

Row = Rows − i1

Col = Columns − j1    (4.6)

Step 3: Compute the length ratio and width ratio between the 2D image and the 3D image:

KL = L3D / L2D,  KW = W3D / W2D    (4.7)

(L2D is the 2D image length and L3D the 3D image length; W2D and W3D are the corresponding widths)

Step 4: Use the Mark points to compute the row and column coordinates of the 3D bird's nest impurity region:

Row3m = KL · i2 + Row

Colm3m = KW · j2 + Col    (4.8)

Step 5: The obtained Row3m and Colm3m are the two-dimensional coordinates of each feather region within the three-dimensional image;

Step 6: Every two-dimensional coordinate point in the three-dimensional image corresponds to a fixed height value Z, so the three-dimensional coordinates (Row3m, Colm3m, Z) of each feather region can be obtained from the 3D camera.
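Steps 1–5 can be sketched as a single function (illustrative; it follows formulas (4.6) and (4.8) exactly as written in the text, and takes the scale factors KL and KW of (4.7) as inputs, since the precise ratio definition is not reproduced here):

```python
def to_3d_region_coords(obj_2d, mark_2d, mark_3d, kl, kw):
    """Map a 2D impurity coordinate into the 3D image via the Mark points.

    obj_2d  = (Rows, Columns): impurity region coordinate in the 2D image
    mark_2d = (i1, j1): Mark point centroid in the 2D image
    mark_3d = (i2, j2): Mark point centroid in the 3D image
    kl, kw  : length/width scale factors between the two images, per (4.7)
    """
    rows, columns = obj_2d
    i1, j1 = mark_2d
    i2, j2 = mark_3d
    row = rows - i1           # (4.6): row offset from the 2D Mark point
    col = columns - j1        # (4.6): column offset
    row3m = kl * i2 + row     # (4.8), as written in the text
    colm3m = kw * j2 + col
    return row3m, colm3m
```

Reading the z image at (Row3m, Colm3m) then supplies the height value Z of Step 6, completing the (Row3m, Colm3m, Z) triple.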

Here, a maximum allowable deviation σ = 0.5 mm is set. If the computed 3D coordinates (Row3m, Colm3m, Z) fall within this maximum allowable deviation, camera calibration is complete. Otherwise, the camera lens is distortion-corrected again and the calculation restarts from Step1. Since the maximum allowable deviation is specified to 0.1 mm precision, the recalculated results are kept to 0.01 mm precision to reduce calculation error.
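The six-step conversion above can be sketched as follows; this is a minimal illustrative version, assuming the Mark-point centroids and image sizes are already known (the function and variable names are hypothetical, and the per-point height lookup from the 3D camera is left out):

```python
import numpy as np

def convert_2d_to_3d(mark2d, mark3d, rows2d, cols2d, size2d, size3d):
    """Map 2D-image impurity pixel coordinates into the 3D image
    using the Mark-point conversion of equations (4.6)-(4.8)."""
    i1, j1 = mark2d          # 2D Mark-point centroid
    i2, j2 = mark3d          # 3D Mark-point centroid
    L2D, W2D = size2d        # 2D image length and width
    L3D, W3D = size3d        # 3D image length and width

    # (4.6): row/column offsets from the Mark point in the 2D image
    row = np.asarray(rows2d, dtype=float) - i1
    col = np.asarray(cols2d, dtype=float) - j1

    # (4.7): length and width ratios between the two images
    kl = L2D / L3D
    kw = W2D / W3D

    # (4.8): coordinates of the same regions in the 3D image
    row3m = kl * i2 + row
    col3m = kw * j2 + col
    return row3m, col3m
```

For example, feeding in the Mark centroids of Table 4-1 and the region centroids of Section 4.2.2 yields the candidate (Row3m, Colm3m) positions, which are then checked against the σ = 0.5 mm tolerance.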

4.2, Experimental results;

4.2.1, Mark point centroid coordinates;

The centroids of the 2D-image Mark point and the 3D-image Mark point, computed with the centroid formula, are as follows (see Table 4-1 below):

Table 4-1, Mark point centroid coordinates

Mark point: Centroid coordinates
2D (i1, j1): (469.539, 976.531)
3D (i2, j2): (172.196, 1352.08)
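The centroid formula referred to above is the image-moment centroid ic = M10/M00, jc = M01/M00 (formulas (4.4) and (4.5)); a small sketch for a binary region mask:

```python
import numpy as np

def region_centroid(mask):
    """Centroid (ic, jc) of a binary region via image moments:
    M00 = sum f, M10 = sum i*f, M01 = sum j*f."""
    mask = np.asarray(mask, dtype=float)
    rows, cols = np.indices(mask.shape)
    m00 = mask.sum()
    ic = (rows * mask).sum() / m00   # row coordinate of centroid
    jc = (cols * mask).sum() / m00   # column coordinate of centroid
    return ic, jc
```

The same computation, applied to the segmented Mark regions of the 2D and 3D images, produces (i1, j1) and (i2, j2).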

4.2.2, Centroid coordinates of the bird's nest impurity regions;

Two empty arrays R and C are created to store the row and column coordinates, respectively, of the feather region centroids.

Twenty bird's nest feather impurity regions were detected in the 2D bird's nest image; their region centroid coordinates (R, C) are as follows, where:

R = [1159.88, 1179.29, 1407.11, 1712.31, 1708.69, 1752.07, 2205.47, 2275.48, 2215.88, 2304.67, 970.077, 986.445, 1005.75, 1397.07, 1445.52, 1494.47, 1534.48, 1568.59, 1669.79, 2103.62, 2224.48];

C = [930.488, 836.064, 1724.77, 857.697, 1532.5, 1157.49, 1510.98, 1126.13, 1429.93, 840.633, 1156.68, 1250.89, 1079.05, 707.473, 417.415, 557.804, 274.456, 359.435, 265.062, 755.561, 599.084];

4.2.3, Coordinates of the bird's nest impurity regions;

Two empty arrays Rows and Columns are created to store the row and column coordinates, respectively, of the pixels in the feather regions.

Twenty bird's nest feather impurity regions were detected in the 2D bird's nest image, together containing more than 70,000 region coordinates; 50 coordinates from one of the regions are therefore extracted below, shown as (Rows, Columns), where:

Rows = [949, 949, 949, 949, 949, 949, 950, 950, 950, 950, 950, 950, 950, 950, 950, 950, 950, 950, 950, 950, 950, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 951, 952, 952, 952, 952, 952, 952, 952, 952];

Columns = [1138, 1139, 1171, 1172, 1173, 1174, 1135, 1136, 1137, 1138, 1139, 1140, 1141, 1170, 1171, 1172, 1173, 1174, 1175, 1176, 1177, 1135, 1136, 1137, 1138, 1139, 1140, 1141, 1142, 1169, 1170, 1171, 1172, 1173, 1174, 1175, 1176, 1177, 1178, 1179, 1180, 1181, 1134, 1135, 1136, 1137, 1138, 1139, 1140, 1141];

The coordinates of the 3D bird's nest feather impurity regions can be calculated from the Mark point conversion formula; two empty arrays Rows3m and Columns3m are created to store the row and column coordinates of the feather regions in the 3D bird's nest image.

4.2.4, 3D model of the bird's nest impurities;

Using the acquired 3D coordinates (Row3m, Colm3m, Z) of the bird's nest feather impurity regions, the 3D model of the bird's nest impurities shown in Figure 17 is generated, where a is the original view of the generated 3D model, b is the model rotated 90° to the right, and c is the model rotated 90° to the left.

Figure 17 shows the specific location and area of each bird's nest impurity, which facilitates picking. However, these are only the surface-layer impurities; feather impurities inside the bird's nest that are not exposed at the surface are not detected. The surface-sorted bird's nest can therefore be re-imaged with the same combination of area-scan camera and 3D camera to acquire images of its internal impurities, and those images processed with the same method described above to sort out the internal feather impurity regions.

4.3, Experimental analysis;

Experiments were carried out on each stage of the bird's nest feather impurity identification and detection algorithm of the present invention. Experimental verification and analysis show that the sorting method fusing 2D and 3D images achieves high detection accuracy, with performance meeting the expected results and the requirements of practical bird's nest processing.

The invention improves the working efficiency of bird's nest feather impurity sorting and effectively reduces the production cost of bird's nest; it achieves higher precision than manually picked bird's nest products and can carry out stable feather impurity picking over long periods; introducing the method improves the quality of bird's nest products, reduces their missed-detection and false-detection rates, and yields reliable, stable, and accurate inspection; at the same time it lowers workers' labor intensity and reduces harm to their eyes, necks, and general health; it also raises working efficiency, lowers labor cost, and greatly strengthens an enterprise's competitiveness in the market.

The above is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by the above content; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.

Claims (3)

1. A bird's nest impurity sorting method fusing 2D and 3D images, characterized in that it comprises the following steps:
S1, reconstruction and identification of bird's nest impurities;
S1.1, image preprocessing;
Image preprocessing is a very important link in the bird's nest identification process; during image acquisition the images are subject to various kinds of noise and interference from the surrounding environment, which not only degrade the image but often submerge the required information, causing trouble for subsequent feature extraction; to filter out noise interference, improve image quality, and emphasize the required information, the images must undergo the relevant preprocessing before bird's nest impurity identification and detection;
S1.1.1, image filtering;
During bird's nest image transmission and processing, the images are often polluted by various kinds of noise, producing dark-spot or bright-spot interference that lowers image quality and also affects the accuracy of feature extraction in image processing; an effective image filtering algorithm is therefore chosen to counter the influence of noise; common image filtering algorithms include frequency-domain filtering and, in the spatial domain, mean filtering and median filtering;
The median filtering algorithm is a neighborhood operation: when median-filtering an image, the values covered by the template are arranged in ascending order, and the median of this sequence is assigned to the pixel at the template center position; if the template has an odd number of points, the gray value of the middle pixel after sorting by size is taken as the median; if the template has an even number of points, the average of the two middle values after sorting by size is taken as the median;
Since the effect of median filtering depends on the filter window size, where too large a window blurs edges and too small a window denoises poorly, the median filtering algorithm is improved: the image is scanned progressively, and when each pixel is processed it is judged whether that pixel is the maximum or minimum of the neighborhood pixels covered by the filter window; if so, the pixel is processed with the normal median filtering algorithm; if not, it is left unchanged;
Image filtering is carried out using the improved 3 × 3 median filtering algorithm;
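A minimal sketch of the improved median filter described above, in plain NumPy; the replicate padding at the image border is an assumption, since the text does not specify border handling:

```python
import numpy as np

def improved_median_filter(img, ksize=3):
    """Improved median filter: only pixels that are the extreme
    (max or min) of their filter window are replaced by the window
    median; all other pixels are left untouched."""
    img = np.asarray(img)
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")   # replicate border pixels
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + ksize, j:j + ksize]
            v = img[i, j]
            if v == win.max() or v == win.min():   # likely impulse noise
                out[i, j] = np.median(win)
    return out
```

Pixels that are not local extremes pass through unchanged, which is what preserves edges relative to a plain median filter.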
S1.1.2, image enhancement;
Image contrast is enhanced with a piecewise linear transform function, which in effect strengthens the contrast between the parts of the original image, i.e. enhances the gray regions of interest in the input image while relatively suppressing the gray regions of no interest;
The piecewise linear transform function has the following form:
y = (y1/x1) · x, for 0 ≤ x < x1
y = ((y2 − y1)/(x2 − x1)) · (x − x1) + y1, for x1 ≤ x < x2
y = ((255 − y2)/(255 − x2)) · (x − x2) + y2, for x2 ≤ x ≤ 255   (3.1)
where (x1, x2) and (y1, y2) are the main parameters in formula (3.1); from the algorithm function it can be seen that x1 and x2 delimit the gray range of the processed object that needs to be transformed, while y1 and y2 determine the slope of the linear transform;
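Assuming the standard three-segment form of the piecewise linear transform, the enhancement can be sketched as follows, with parameter names taken from the formula:

```python
import numpy as np

def piecewise_linear(img, x1, x2, y1, y2, gmax=255):
    """Three-segment linear contrast stretch: gray levels in
    [x1, x2] are mapped to [y1, y2]; the ranges below x1 and
    above x2 are compressed accordingly."""
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    lo = img < x1
    hi = img > x2
    mid = ~lo & ~hi
    out[lo] = (y1 / x1) * img[lo]
    out[mid] = (y2 - y1) / (x2 - x1) * (img[mid] - x1) + y1
    out[hi] = (gmax - y2) / (gmax - x2) * (img[hi] - x2) + y2
    return out
```

Choosing y2 − y1 larger than x2 − x1 gives the middle segment a slope above 1, stretching the gray range of interest.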
S1.2, image segmentation;
Image segmentation uses the thresholding method: background and object are separated by choosing an optimal threshold, thereby segmenting the image; a reasonable threshold is set and the image is judged against it, with the gray values of the parts satisfying the given threshold range set to 0 and the rest set to 1, so that the target of interest is separated from the image and a binary image is generated;
Threshold segmentation transforms the input image f into an output g by the following transform:
g(i, j) = 1, if f(i, j) ≥ T; g(i, j) = 0, otherwise
In the formula above, T is the given threshold, g(i, j) = 0 denotes an image element of the background part, and g(i, j) = 1 denotes an image element of the target object part; threshold segmentation thus scans all pixels of the image f: if f(i, j) ≥ T, the element g(i, j) of the segmented image is an object pixel; otherwise it is a background pixel;
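The threshold transform is a one-line operation; a minimal sketch:

```python
import numpy as np

def threshold_segment(img, t):
    """Binary segmentation: pixels with gray value >= t become
    object (1), all others background (0)."""
    return (np.asarray(img) >= t).astype(np.uint8)
```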
S1.3, feature selection and feather impurity region extraction;
After the various regions of interest have been obtained by image segmentation, simple region descriptors can be used as features representing each region, and these region features can be combined into feature vectors for classification;
The simple region descriptors are: perimeter, area, compactness, region centroid, gray mean, gray median, minimum enclosing rectangle of the region, minimum and maximum gray level, the number of pixels above or below the mean, and the Euler number;
S1.4, localization of the bird's nest feather impurity regions;
When localizing the identified feather impurities, they need to be classified and corrected;
S1.4.1, classification of the feather impurity regions;
The Euclidean distance d between each feather impurity region's centroid and the nearest point of its own region is used to divide the feather impurity regions into two classes; the feather impurity region centroid (R, C) is found with the centroid formulas (4.4) and (4.5), and the Euclidean distance formula is then:
d = sqrt((R − i1)² + (C − j1)²)
where (i1, j1) is the region point nearest the centroid; if d = 0, the region belongs to the first class, i.e. the centroid falls inside the feather region; if d ≠ 0, the region belongs to the second class, i.e. the centroid falls outside the feather region;
The first-class centroids falling inside the feather regions are exactly what is required, while the second-class feather regions need to be corrected: the centroids falling outside the second-class feather regions are moved by the correction algorithm to new points (ic, jc) falling inside the feather regions;
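The two-class split can be sketched as follows; rounding the fractional centroid to its nearest pixel before testing membership is an assumption, since an exact d = 0 match is otherwise impossible for sub-pixel centroids:

```python
import numpy as np

def classify_regions(centroids, region_points):
    """Split feather regions into two classes by the Euclidean
    distance d between each (rounded) centroid and the nearest
    pixel of its own region: d == 0 -> class 1 (centroid inside),
    d != 0 -> class 2 (centroid outside, needs correction)."""
    labels = []
    for (r, c), pts in zip(centroids, region_points):
        pts = np.asarray(pts, dtype=float)
        r0, c0 = round(r), round(c)          # nearest pixel to the centroid
        d = np.sqrt(((pts - (r0, c0)) ** 2).sum(axis=1)).min()
        labels.append(1 if d == 0 else 2)
    return labels
```

Class-2 regions are then handed to the centroid correction algorithm of step S1.4.2.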
S1.4.2, centroid correction;
In order to correct the centroids lying outside the feather regions, a straight-line extension algorithm along the semi-major axis of the minimum enclosing ellipse is introduced;
S1.5, 3D bird's nest impurity reconstruction;
The stereo registration of the area-scan camera's 2D image with the depth camera's 3D image uses the spatial geometric coordinate transformation to find the correspondence between the pixel coordinates of the two images; first, the Mark region and the centroid coordinates of the Mark region are extracted from the 2D bird's nest image of the area-scan camera and from the bird's nest image of the 3D camera, respectively; then feather impurity features are extracted from two or more images of the area-scan camera, giving the centroid coordinates of the bird's nest feather impurity regions; next, the matching bird's nest feather impurity feature-point regions in the 3D bird's nest image are found according to the Mark point conversion formula; finally the matching of the images is completed and the 3D model is generated;
S1.5.1, acquisition of the bird's nest feather impurity point cloud data and bird's nest impurity model reconstruction;
According to the generated 3D bird's nest impurity regions, the 3D bird's nest original image is segmented accordingly to generate the specified bird's nest feather impurity region image; the 3D feather impurity region image is then decomposed into X, Y, and Z images containing the three-dimensional coordinate point information, and the three-dimensional X, Y, Z images are converted into a 3D bird's nest feather impurity point cloud, yielding discrete three-dimensional feature points of the bird's nest impurity surface; in order to reconstruct the impurity surface, the points are also triangulated, finally reconstructing the surface of the bird's nest feather impurities;
S1.5.2, bird's nest feather impurity feature identification;
The correspondence between the two area-scan camera bird's nest images and the 3D camera bird's nest image is obtained with the Mark point formula, yielding the three-dimensional representation of all bird's nest impurity region points in the 3D bird's nest image;
According to the generated 3D bird's nest impurity regions, the 3D bird's nest original image is reduced to the specified bird's nest feather impurity region image, which is then decomposed into x, y, and z coordinate images of the 3D points, where the z image is the height image; taking the z image as the processing object, the 3D bird's nest feather impurity coordinates (x, y) are found with the centroid formula, and every two-dimensional coordinate point in the 3D image corresponds to a fixed height value Z; since the gray value of the z image is exactly its height value Z, the gray-value mean Mean and deviation Deviation are first computed, taking the centroid coordinate in the z image as the circle center, over the ring between the minimum enclosing annulus of the feather impurity region enlarged by 30 pixels and the same annulus enlarged by 60 pixels, which can be described with the following formulas:
Mean = (1/N) Σ g(i, j),  Deviation = sqrt((1/N) Σ (g(i, j) − Mean)²)   (3.5)
where the sums run over the N pixels of the ring; the height Z is then:
Z = Mean − Mean1   (3.6)
so that the three-dimensional coordinates (Row3m, Colm3m, Z) of each feather region can be obtained from the 3D camera;
S2, experiments;
S2.1, camera calibration;
S2.1.1, distortion correction;
The lenses used in vision systems generally exhibit distortion to varying degrees, and a pixel's distance from the image center affects its degree of distortion: the closer to the image center, the smaller the distortion; this distortion is nonlinear and can be described with the following formula:
x' = x + δx(x, y),  y' = y + δy(x, y)   (4.1)
In the above formula, (x', y') represent the ideal distortion-free pixel coordinates satisfying the linear imaging model, (x, y) represent the actual image point coordinates, and δx and δy are the nonlinear distortion values, which are related to the position of the image point in the image and can be expressed with the following formula:
δx = k1·x·(x² + y²) + (p1·(3x² + y²) + 2·p2·x·y) + s1·(x² + y²)
δy = k2·y·(x² + y²) + (2·p1·x·y + p2·(x² + 3y²)) + s2·(x² + y²)   (4.2)
where the first term of δx or δy is the radial distortion, the second term the centrifugal (decentering) distortion, and the third term the thin prism distortion; the coefficients in the formula are called the nonlinear distortion parameters; introducing too many nonlinear distortion parameters makes the solution unstable and hurts rather than helps accuracy; formula (4.2) can therefore be simplified to:
δx = k1·x·(x² + y²),  δy = k2·y·(x² + y²)   (4.3)
From this it is apparent that the distortion increases as the radial radius increases, i.e. the parts of the image far from the image center are distorted more severely;
S2.1.2, Mark point selection;
Through the Mark points, the detected object's coordinates in the 2D image are transformed into the 3D image, and whether the converted three-dimensional coordinates are the detected object's position is judged, so as to achieve the camera calibration goal;
Image acquisition is carried out with an area-scan camera combined with a 3D camera; the Mark points chosen are circles or rectangles with a certain height, uniform color, and regular shape;
Since the Mark points and the detected objects differ markedly in gray level and shape, the region where the Mark point lies is first segmented out with the gray threshold method, and the Mark point is then identified with the feature extraction method;
S2.1.3, Mark point centroid calculation;
The Mark points in the 2D image and the 3D image are identified with the Mark point recognition method above, and the respective Mark point centroid coordinates are then calculated; for a 2D discretized digital image with f(x, y) ≥ 0, the (p+q)-order moment Mpq and the central moment μpq are defined as:
Mpq = Σx Σy x^p · y^q · f(x, y),  μpq = Σx Σy (x − ic)^p · (y − jc)^q · f(x, y)   (4.4)
In the above formula, (ic, jc) is the centroid coordinate, and
ic = M10/M00,  jc = M01/M00   (4.5)
Therefore, the Mark point centroid coordinates of the 2D image and the 3D image, respectively (i1, j1) and (i2, j2), are found from the above formulas;
S2.1.4, Mark point conversion formula for image coordinate transformation;
According to the Mark point coordinates, the detected object's coordinates in the 2D image are transformed by coordinate conversion into the 3D image; if the coordinate point obtained in the 3D image is exactly the detected object's position, the calibration between the 2D camera and the 3D camera is complete;
S2.2, experimental results;
S2.2.1, Mark point centroid coordinates;
The centroids of the 2D-image Mark point and the 3D-image Mark point are calculated with the centroid formula;
S2.2.2, centroid coordinates of the bird's nest impurity regions;
Two empty arrays R and C are created to store the row and column coordinates of the feather region centroids; 20 bird's nest feather impurity regions are detected in the 2D bird's nest image, with region centroid coordinates (R, C);
S2.2.3, coordinates of the bird's nest impurity regions;
Two empty arrays Rows and Columns are created to store the row and column coordinates of the feather region pixels; 20 bird's nest feather impurity regions are detected in the 2D bird's nest image, with region coordinates (Rows, Columns);
The 3D bird's nest feather impurity region coordinates can be calculated according to the Mark point conversion formula; two empty arrays Rows3m and Columns3m are created to store the row and column coordinates of the feather regions in the 3D bird's nest image;
S2.2.4, bird's nest impurity 3D model;
Using the obtained 3D coordinates (Row3m, Colm3m, Z) of the bird's nest feather impurity regions, the 3D model diagram of the bird's nest impurities is generated;
From it the specific location and size of the bird's nest impurities can be seen, facilitating impurity picking; but these are only the surface-layer impurities, and feather impurities inside the bird's nest that are not exposed at the surface are not detected; the surface-sorted bird's nest can therefore be re-imaged with the combined area-scan camera and 3D camera method to acquire images of the internal impurities, which are then processed with the same method above to sort out the bird's nest feather impurity regions.
2. The bird's nest impurity sorting method fusing 2D and 3D images according to claim 1, characterized in that the straight-line extension algorithm along the semi-major axis of the minimum enclosing ellipse in step S1.4.2 proceeds as follows:
Step1: Find the minimum enclosing ellipse of each second-class feather region, obtaining each region's semi-major axis a;
Step2: Take the out-of-region centroid (R, C) as the start point and the region point (i1, j1) nearest the centroid as the end point, and connect the two points to obtain the line segment L;
Step3: From the nearest point (i1, j1), extend the line L by length a to obtain the new line M;
Step4: Find the intersection (m, n) of the new line M with the feather region, and compute the midpoint (p, q) of (m, n) and the nearest point (i1, j1); this midpoint is the corrected point required.
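The geometric core of Step1 to Step4 can be sketched as follows; approximating the intersection (m, n) by the region pixel nearest the end of the extended line is an assumption made to keep the sketch short:

```python
import numpy as np

def correct_centroid(centroid, nearest, a, region_points):
    """Extend the line from the outside centroid through the
    nearest region point by the semi-major axis length a, take the
    region pixel closest to the extended line's end as the
    intersection (m, n), and return the midpoint of (m, n) and the
    nearest point as the corrected centroid (p, q)."""
    centroid = np.asarray(centroid, dtype=float)
    nearest = np.asarray(nearest, dtype=float)
    direction = nearest - centroid
    direction /= np.linalg.norm(direction)      # unit vector of segment L
    end = nearest + a * direction               # end of the extended line M
    pts = np.asarray(region_points, dtype=float)
    m_n = pts[np.argmin(((pts - end) ** 2).sum(axis=1))]  # ~ intersection
    return (m_n + nearest) / 2                  # corrected point (p, q)
```

Because the corrected point is the midpoint of two region-boundary points along the same ray, it lands inside the feather region for convex or near-convex regions.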
3. The bird's nest impurity sorting method fusing 2D and 3D images according to claim 1, characterized in that the calibration process in step S2.1.4 is as follows:
Step1: Find the 2D-image Mark point coordinates (i1, j1) and the 2D-image impurity region coordinates (Rows, Columns);
Step2: Compute the row distance and column distance between the Mark point coordinates and the 2D bird's nest impurity region coordinates in the 2D image:
Row = Rows − i1
Col = Columns − j1   (4.6)
Step3: Compute the length ratio and width ratio between the 2D image and the 3D image:
KL = L2D / L3D,  KW = W2D / W3D   (4.7)
(L2D is the 2D image length, L3D is the 3D image length; W2D and W3D are the corresponding image widths)
Step4: Use the Mark points to compute the row and column coordinates of the 3D bird's nest impurity regions:
Row3m = KL * i2 + Row
Colm3m = KW * j2 + Col   (4.8)
Step5: The resulting Row3m and Colm3m are the 2D coordinates of each feather region in the 3D image;
Step6: Every 2D coordinate point in the 3D image corresponds to a fixed height value z, so the 3D coordinates (Row3m, Colm3m, Z) of each feather region can be obtained from the 3D camera;
Here, a maximum allowable deviation σ = 0.5 mm is set; if the computed 3D coordinates (Row3m, Colm3m, Z) fall within the maximum allowable deviation, camera calibration is complete; otherwise the camera lens is distortion-corrected again and the calculation restarts from Step1; since the maximum allowable deviation is specified to 0.1 mm precision, the recalculated results are kept to 0.01 mm precision to reduce calculation error.
CN201910282067.1A 2019-04-09 2019-04-09 A kind of bird's nest impurity method for sorting merging 2D and 3D rendering Pending CN110176020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282067.1A CN110176020A (en) 2019-04-09 2019-04-09 A kind of bird's nest impurity method for sorting merging 2D and 3D rendering


Publications (1)

Publication Number Publication Date
CN110176020A true CN110176020A (en) 2019-08-27




Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184563A (en) * 2011-03-23 2011-09-14 华中科技大学 Three-dimensional scanning method, three-dimensional scanning system and three-dimensional scanning device used for plant organ form
CN103985155A (en) * 2014-05-14 2014-08-13 北京理工大学 Scattered point cloud Delaunay triangulation curved surface reconstruction method based on mapping method
US9578309B2 (en) * 2014-06-17 2017-02-21 Actality, Inc. Adjustable parallax distance, wide field of view, stereoscopic imaging system
TW201643811A (en) * 2015-01-09 2016-12-16 鴻海精密工業股份有限公司 System and method for merging point cloud data
US20160253807A1 (en) * 2015-02-26 2016-09-01 Mitsubishi Electric Research Laboratories, Inc. Method and System for Determining 3D Object Poses and Landmark Points using Surface Patches
CN108022264A (en) * 2016-11-01 2018-05-11 狒特科技(北京)有限公司 Camera pose determines method and apparatus
CN106651882A (en) * 2016-12-29 2017-05-10 广东工业大学 Method and device for identifying and detecting cubilose impurities based on machine vision
CN107392956A (en) * 2017-06-08 2017-11-24 北京农业信息技术研究中心 Crop root Phenotypic examination method and apparatus
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
CN108682033A (en) * 2018-05-29 2018-10-19 石河子大学 A kind of phase safflower filament two-dimensional image center in full bloom point extracting method
CN109544681A (en) * 2018-11-26 2019-03-29 西北农林科技大学 A 3D digitization method of fruit based on point cloud
CN109544456A (en) * 2018-11-26 2019-03-29 湖南科技大学 The panorama environment perception method merged based on two dimensional image and three dimensional point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Baoyun: "Research on constructing three-dimensional building models from 2D photos", Electronic Technology & Software Engineering *

CN119007175A (en) * 2024-10-23 2024-11-22 杭州汇萃智能科技有限公司 Feather intelligent recognition method based on industrial camera


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190827)