CN113743523B - Building rubbish fine classification method guided by visual multi-feature - Google Patents
- Publication number
- CN113743523B (application CN202111071050.5A)
- Authority
- CN
- China
- Prior art keywords
- classification
- color
- visual
- features
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
A visual multi-feature guided fine classification algorithm for construction waste comprises the following steps: step 1) collect samples with a visual sensor and preprocess them; step 2) extract and encode the color feature of each sample from the global information of the image; step 3) extract and encode the texture feature of each sample from the local information of the image; step 4) construct the color-texture statistical feature of the building material; step 5) input the color feature and the color-texture statistical feature into separate classifiers, train the classification models, and obtain two evidence matrices from different feature spaces; step 6) design an effective decision model from the evidence matrices and output the decision matrix; step 7) determine the label of the material to be sorted from the category with the highest probability in the decision matrix. The present invention provides an effective decision-based classification model that determines the category label of the material and achieves accurate recognition of the target.
Description
Technical Field
The invention belongs to the technical field of pattern recognition and machine vision, and in particular relates to a visual multi-feature guided fine classification method for construction waste.
Background
Most existing recycling equipment relies on processes such as iron removal, crushing, screening, and magnetic separation, which are inefficient and involve complicated operating procedures. Manual sorting further increases the cost of resource reuse, and the harsh working environment harms the health of the workers. Establishing efficient, fast sorting equipment has therefore become an important way to improve the utilization rate of construction waste.
With the development of machine vision technology over the past decade, pattern recognition theory has been applied across industries owing to its unique advantages. In the field of construction waste identification, some researchers have proposed using color features to separate bricks and stones of similar density, and others have proposed classifying building materials by volume, weight, and similar attributes. However, these methods all target two-class problems; they scale poorly and cannot meet the industrial need to classify multiple kinds of material.
Summary of the Invention
In order to overcome the shortcomings of the prior art described above, the object of the present invention is to provide a visual multi-feature guided fine classification method for construction waste. Based on pattern recognition and machine vision theory, the method extracts information about building materials and constructs new salient features, thereby achieving accurate classification of multiple kinds of material.
To achieve the above object, the present invention adopts the following technical solution:
A visual multi-feature guided fine classification method for construction waste comprises the following steps:
Step 1: collect target sample images of construction waste with a visual sensor and binarize them;
Step 2: based on the global information of the image, extract the color feature of the sample and encode it as the color feature F_c;
Step 3: based on the local information of the image, extract the texture features of the sample and encode them as the texture feature F_t;
Step 4: construct the color-texture statistical feature F_ct of the building material;
Step 5: input F_c and F_ct into separate classifiers and train the classification models; a test image passed through the trained models yields two evidence matrices, P_c and P_ct, from different feature spaces;
Step 6: design a decision model from the evidence matrices P_c and P_ct obtained in Step 5 and output the decision matrix P;
Step 7: determine the label of the material to be sorted from the category with the highest probability in the decision matrix P.
The method for extracting and encoding the sample color feature in Step 2 is as follows:
First, convert the RGB image collected by the visual sensor into the HSV color space, where H denotes hue, S saturation, and V value (brightness), and take the H-channel information to represent the color of the object. The hue is computed with the standard conversion:

H = 0, if max = min;
H = 60°·(G − B)/(max − min), if max = R and G ≥ B;
H = 60°·(G − B)/(max − min) + 360°, if max = R and G < B;
H = 60°·(B − R)/(max − min) + 120°, if max = G;
H = 60°·(R − G)/(max − min) + 240°, if max = B;

where max and min satisfy max = max(R, G, B) and min = min(R, G, B).
Then, compute a histogram of the H-channel values to obtain the color feature F_c.
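As an illustration of Step 2, the following is a minimal sketch of the color-feature extraction using OpenCV and NumPy; the bin count and the normalization are assumptions, since the patent only states that a histogram of the H channel is computed.

```python
import cv2
import numpy as np

def color_feature(bgr_image, bins=30):
    """Hue histogram of an image, used here as the global color feature F_c.

    The number of bins (30) is illustrative, not prescribed by the patent.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)   # OpenCV stores H in [0, 179]
    hue = hsv[:, :, 0]
    hist, _ = np.histogram(hue, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)                   # normalize to a probability-like vector
```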
The method for extracting and encoding the texture features of the sample image in Step 3 is as follows:
Step 3.1: extract local texture features.
First, divide the binary image from Step 1 into sub-regions of 32×32 pixels, take a cell size of 16×16 pixels and a block size of 2×2 cells, and for each pixel (x, y) extract the gray values of its four-neighborhood pixels N(x, y+1), N(x, y−1), N(x+1, y), and N(x−1, y);
Then, compute the gradient magnitude and gradient direction of each pixel; in standard form,

m_x(x, y) = N(x + 1, y) − N(x − 1, y), m_y(x, y) = N(x, y + 1) − N(x, y − 1),
m(x, y) = sqrt(m_x(x, y)^2 + m_y(x, y)^2), θ(x, y) = arctan(m_y(x, y) / m_x(x, y)),

where m_y and m_x are the vertical and horizontal gradients of pixel (x, y), respectively.

Finally, concatenate the histograms of oriented gradients of the cells within each block, and represent the distribution of unsigned gradient directions in each sub-region with a 1×36-dimensional vector (2×2 cells of 9 orientation bins each).
Step 3.2: construct the visual feature words.
Extract the local texture features of each image class according to Step 3.1, cluster the feature vectors of each class into K clusters with the K-means clustering algorithm, and treat the cluster centers as visual words;
Step 3.3: texture feature encoding.
Gather the visual words of all classes from Step 3.2 into a visual bag of words; for each vector computed in Step 3.1, look up the nearest visual word (nearest-neighbor principle) to encode the texture features and obtain the texture feature F_t.
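A sketch of Steps 3.1–3.3 (block-wise gradient histograms, K-means visual words, nearest-neighbor encoding) is given below, assuming scikit-image and scikit-learn. The HOG parameters mirror the 16×16-pixel cells and 2×2-cell blocks stated above, while K = 50 and the 9 orientation bins are assumptions consistent with the 1×36-dimensional block descriptor; for brevity the sketch also pools all descriptors into one clustering, whereas the patent clusters each class separately and merges the resulting centers.

```python
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def block_descriptors(gray_image):
    """Per-block 36-D HOG vectors (2x2 cells x 9 orientation bins) of one image."""
    return hog(gray_image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=False).reshape(-1, 36)

def build_vocabulary(training_images, k=50):
    """Cluster block descriptors into k visual words (Step 3.2); k is illustrative."""
    descriptors = np.vstack([block_descriptors(img) for img in training_images])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

def texture_feature(gray_image, vocabulary):
    """Encode an image as a histogram over visual words (Step 3.3), i.e. F_t."""
    words = vocabulary.predict(block_descriptors(gray_image))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1)
```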
Step 4 specifically comprises constructing the color-texture statistical feature of the building material as the weighted concatenation

F_ct = [α·F_c, β·F_t]^T,

where α and β are the fusion weights of the color feature F_c and the texture feature F_t, respectively, and [·]^T denotes the matrix transpose.
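A minimal sketch of the Step 4 fusion follows, assuming the weighted-concatenation reading of the formula above; the weight values are placeholders, not values prescribed by the patent.

```python
import numpy as np

def color_texture_feature(f_c, f_t, alpha=0.5, beta=0.5):
    """Weighted concatenation of the color feature F_c and texture feature F_t (F_ct).

    alpha and beta are the fusion weights; the defaults here are placeholders.
    """
    return np.concatenate([alpha * np.asarray(f_c), beta * np.asarray(f_t)])
```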
In Step 6, the method for designing the classification decision model from the classification probability matrices P_c and P_ct output by the classifiers in Step 5 and outputting the decision matrix P is as follows:
The color feature divides building materials into a color-salient class (red brick and wood) and an "other" class (foam, hard plastic, and concrete), so P_c is a two-dimensional matrix. A distribution matrix A is introduced to redistribute the classification probabilities in P_c over the full set of categories, and a linear fusion decision model combines the redistributed probabilities with P_ct:

P = ω1·(A·P_c) + ω2·P_ct,

where ω1 and ω2 are the weights of A·P_c and P_ct, expressing how much the classification result trusts each of the two feature classification spaces; P is the fused classification probability matrix used to judge the final category of the material, and the construction-waste category with the maximum probability is the final result.
When ω1 ∈ (0, 1), the algorithm is called the visual multi-feature guided construction waste classification algorithm based on a joint decision model (VMF-J), and ω1 = 0.5 is taken as the default value. When ω1 = 0, the final classification result rests entirely on the classification probabilities P_ct of the color-texture statistical feature, and the algorithm is then called the visual multi-feature guided construction waste classification algorithm (VMF). When ω1 = 1, the target is classified by the color feature alone; because the redistribution by A follows an even-allocation principle, an exact category label cannot be obtained in this case.
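The following sketch illustrates the linear fusion decision of Step 6 under the reconstruction P = ω1·(A·P_c) + (1 − ω1)·P_ct. The distribution matrix used here simply spreads each of the two color-level probabilities evenly over its member classes (red brick/wood versus foam/hard plastic/concrete); this even split and the example probabilities are assumptions, since the patent does not reproduce the actual matrix.

```python
import numpy as np

CLASSES = ["red brick", "wood", "foam", "hard plastic", "concrete"]

# Assumed distribution matrix A (5 x 2): the "color-salient" probability is split
# evenly over {red brick, wood}, the "other" probability over the remaining classes.
A = np.array([[1/2, 0],
              [1/2, 0],
              [0, 1/3],
              [0, 1/3],
              [0, 1/3]])

def fuse_decision(p_color, p_ct, w1=0.5):
    """Fuse the 2-class color evidence with the 5-class color-texture evidence."""
    p = w1 * (A @ np.asarray(p_color)) + (1 - w1) * np.asarray(p_ct)
    return p, CLASSES[int(np.argmax(p))]

# Example: color evidence favors the color-salient group, texture evidence
# favors red brick within it; the fused decision is "red brick".
p, label = fuse_decision([0.9, 0.1], [0.5, 0.2, 0.1, 0.1, 0.1])
```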
The final category of the material to be sorted is determined by the category with the highest probability in the decision matrix P.
Beneficial effects of the present invention:
The present invention accurately classifies multiple kinds of material with machine vision technology, providing a solution for diverse market needs. By extracting features from both local and global perspectives of the target, it constructs a new color-texture statistical feature that describes material information more comprehensively. An efficient classification decision model effectively improves the recognition accuracy of the materials. In-depth analysis and intelligent extraction of the salient features of the target enables rapid recognition, realizes automated, intelligent sorting of building materials, and reduces the cost of reusing them.
Brief Description of the Drawings
Figure 1 is a schematic diagram of part of the sample set.
Figure 2 is a flow chart of constructing the color-texture statistical feature.
Figure 3 illustrates the effect of the value of ω1 on classification accuracy.
Figure 4 is a schematic flow chart of the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below in conjunction with an embodiment.
As shown in Figures 1 to 4, the present invention discloses a visual multi-feature guided fine classification method for construction waste; with reference to Figure 2, it comprises the following steps:
Step 1: Use a visual sensor to collect images of the five materials that account for the largest share of construction waste in China, as shown in Figure 1, and preprocess the collected data. Then divide the images of each class into five parts, take any four parts as the training set, and use the remaining images as the test set.
Step 2: Based on the global information of the image, the color feature of the sample is extracted and encoded as follows:
First, convert the image from the RGB color space to the HSV color space and extract the H-channel information. Then compute a histogram of the H channel to obtain the color feature F_c.
Step 3: Based on the local information of the image, the texture features of the sample are extracted and encoded as follows:
Step 3.1: Extract image texture features. First, binarize the RGB image. Then divide the binary image into sub-regions of 32×32 pixels, take a cell size of 16×16 pixels and a block size of 2×2 cells, and for each pixel (x, y) extract the gray values of its four-neighborhood pixels N(x, y+1), N(x, y−1), N(x+1, y), and N(x−1, y). Next, compute the gradient magnitude and direction of each pixel:

m_x(x, y) = N(x + 1, y) − N(x − 1, y), m_y(x, y) = N(x, y + 1) − N(x, y − 1),
m(x, y) = sqrt(m_x(x, y)^2 + m_y(x, y)^2), θ(x, y) = arctan(m_y(x, y) / m_x(x, y)),

where m_y and m_x are the vertical and horizontal gradients of pixel (x, y), respectively. Finally, concatenate the histograms of oriented gradients of the cells within each block, and represent the distribution of unsigned gradient directions in each sub-region with a 1×36-dimensional vector.
Step 3.2: Construct the visual bag of words. Extract the local texture features of each image class according to Step 3.1, cluster the feature vectors of each class into K clusters with the K-means clustering algorithm, and treat the cluster centers as visual words. Finally, the visual words of all classes form the visual bag of words.
Step 3.3: Texture feature encoding. For each vector computed in Step 3.1, look up the nearest visual word in the bag of words and encode the texture features accordingly, forming the texture feature F_t.
Step 4: Fuse the color feature F_c with the texture feature F_t to construct the color-texture statistical feature F_ct, specifically:
For any image I in the sample space S, its color feature and texture feature are fused into a new color-texture statistical feature

F_ct = [α·F_c, β·F_t]^T,

where α and β are the fusion weights of the color feature F_c and the texture feature F_t of the sample image, respectively, and [·]^T denotes the matrix transpose.
Step 5: Input F_c and F_ct into separate classifiers and train the classification models, obtaining two classification probability matrices, P_c and P_ct, from different feature spaces.
Step 6: Based on the matrices P_c and P_ct obtained in Step 5, introduce the linear fusion decision model and fuse the classification probability matrices into the decision matrix P, as follows:
Step 6.1: Design the classification decision model. Since the color feature alone effectively divides building materials into a color-salient class (red brick and wood) and an "other" class (foam, hard plastic, and concrete), P_c is a two-dimensional matrix. To improve classification accuracy, a distribution matrix A is introduced to redistribute the classification probabilities in P_c, and a linear fusion decision model combines the redistributed probabilities with P_ct: P = ω1·(A·P_c) + ω2·P_ct.
Here ω1 and ω2 are the weights of A·P_c and P_ct, expressing how much the decision model trusts the evidence from the two spaces; P is the fused classification probability matrix used to judge the final category of the material, and the construction-waste category with the maximum probability is the final result. When ω1 = 0, the final classification result rests entirely on the classification probabilities of the color-texture statistical feature F_ct, and the algorithm is called the visual multi-feature guided construction waste classification algorithm (VMF). When ω1 = 1, the target is classified by the color feature alone; because the redistribution follows an even-allocation principle, an exact category label cannot be obtained.
Step 6.2: Parameter tuning of the classification decision model. The effect of ω1 on classification accuracy is shown in Figure 3. As ω1 increases, the recognition rate follows one of two trends: it first increases and then decreases, as in Figure 3(3), or it first stays constant and then decreases, as in Figures 3(1)-(2). When ω1 ∈ (0, 1), the algorithm is called the visual multi-feature guided construction waste classification algorithm based on a joint decision model (VMF-J). Experiments show that the classification accuracy is best when ω1 = 0.5, so the present invention takes ω1 = 0.5 as the default value.
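The sweep over ω1 described in Step 6.2 can be sketched as below, reusing the fuse_decision helper sketched after Step 6 above; the evaluation loop and the accuracy metric are generic illustrations, not taken from the patent.

```python
import numpy as np

def sweep_w1(color_probs, ct_probs, labels, fuse_decision, steps=11):
    """Report classification accuracy for w1 values evenly spaced over [0, 1]."""
    results = {}
    for w1 in np.linspace(0.0, 1.0, steps):
        predictions = [fuse_decision(pc, pt, w1=w1)[1]
                       for pc, pt in zip(color_probs, ct_probs)]
        results[round(float(w1), 2)] = float(np.mean(
            [pred == true for pred, true in zip(predictions, labels)]))
    return results  # pick the w1 with the highest accuracy (the patent reports 0.5)
```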
Step 7: Determine the final category of the material to be sorted from the category with the highest probability in the decision matrix P.
To verify the effectiveness of the visual multi-feature guided fine classification algorithm, the VMF and VMF-J algorithms were tested with three commonly used classifiers (SVM, RF, KNN); the experimental results are shown in Tables 1, 2, and 3. In addition, the VMF and VMF-J algorithms were compared with Lab, RGB, and HSV threshold segmentation algorithms; the results are shown in Table 4.
Table 1. Confusion matrix of the SVM classifier for identifying construction waste
Table 2. Confusion matrix of the KNN classifier for identifying construction waste
Table 3. Confusion matrix of the RF classifier for identifying construction waste
Table 4. Comparison of recognition performance of the algorithms
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111071050.5A CN113743523B (en) | 2021-09-13 | 2021-09-13 | Building rubbish fine classification method guided by visual multi-feature |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111071050.5A CN113743523B (en) | 2021-09-13 | 2021-09-13 | Building rubbish fine classification method guided by visual multi-feature |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113743523A CN113743523A (en) | 2021-12-03 |
| CN113743523B true CN113743523B (en) | 2024-05-14 |
Family
ID=78738418
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111071050.5A Active CN113743523B (en) | 2021-09-13 | 2021-09-13 | Building rubbish fine classification method guided by visual multi-feature |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113743523B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114580569B (en) * | 2022-03-31 | 2025-04-15 | 西安建筑科技大学 | A visual recognition method for construction waste materials based on feature coding fusion |
| CN115410050B (en) * | 2022-11-02 | 2023-02-03 | 杭州华得森生物技术有限公司 | Tumor cell detection equipment based on machine vision and method thereof |
| CN116051912B (en) * | 2023-03-30 | 2023-06-16 | 深圳市衡骏环保科技有限公司 | Intelligent identification and classification method for decoration garbage |
| CN118675158B (en) * | 2024-07-11 | 2025-04-18 | 深圳市悦盛环保科技有限公司 | A construction waste identification and recycling method, device, equipment and storage medium |
| CN120451717B (en) * | 2025-04-21 | 2025-10-03 | 广西青辉环保技术有限责任公司 | Construction waste multisource information fusion intelligent sorting method and system |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6751354B2 (en) * | 1999-03-11 | 2004-06-15 | Fuji Xerox Co., Ltd | Methods and apparatuses for video segmentation, classification, and retrieval using image class statistical models |
| EP3008666B1 (en) * | 2013-06-13 | 2019-11-20 | Sicpa Holding SA | Image based object classification |
- 2021-09-13 CN CN202111071050.5A patent/CN113743523B/en active Active
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102622607A (en) * | 2012-02-24 | 2012-08-01 | 河海大学 | Remote sensing image classification method based on multi-feature fusion |
| CN107886095A (en) * | 2016-09-29 | 2018-04-06 | 河南农业大学 | A kind of classifying identification method merged based on machine vision and olfactory characteristic |
| CN106778810A (en) * | 2016-11-23 | 2017-05-31 | 北京联合大学 | Original image layer fusion method and system based on RGB feature Yu depth characteristic |
| CN110880019A (en) * | 2019-10-30 | 2020-03-13 | 北京中科研究院 | Methods for training target domain classification models via unsupervised domain adaptation |
| CN111104943A (en) * | 2019-12-17 | 2020-05-05 | 西安电子科技大学 | Color image region-of-interest extraction method based on decision-level fusion |
| CN111401485A (en) * | 2020-06-04 | 2020-07-10 | 深圳新视智科技术有限公司 | Practical texture classification method |
| CN112488050A (en) * | 2020-12-16 | 2021-03-12 | 安徽大学 | Color and texture combined aerial image scene classification method and system |
Non-Patent Citations (2)
| Title |
|---|
| Scene classification combining feature-level and decision-level fusion; He Gang; Huo Hong; Fang Tao; Journal of Computer Applications; 2016-05-31 (No. 05); pp. 1262-1266 * |
| Discussion on computer-vision-based waste analysis and recognition; Wan Meifen; Computer Knowledge and Technology; 2020-08-31; Vol. 16, No. 24; pp. 189-190 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113743523A (en) | 2021-12-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113743523B (en) | Building rubbish fine classification method guided by visual multi-feature | |
| CN114580569B (en) | A visual recognition method for construction waste materials based on feature coding fusion | |
| CN107169953B (en) | Detection method of bridge concrete surface cracks based on HOG feature | |
| CN102968637B (en) | Complicated background image and character division method | |
| CN101957920B (en) | License plate search method based on digital video | |
| Salem | Segmentation of white blood cells from microscopic images using K-means clustering | |
| CN108520278A (en) | A Detection Method and Evaluation Method for Pavement Cracks Based on Random Forest | |
| CN105321176A (en) | Image segmentation method based on hierarchical higher order conditional random field | |
| CN108596038B (en) | Method for identifying red blood cells in excrement by combining morphological segmentation and neural network | |
| CN111401426A (en) | Small sample hyperspectral image classification method based on pseudo label learning | |
| CN105427309A (en) | Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information | |
| CN106709530A (en) | License plate recognition method based on video | |
| CN109213886B (en) | Image retrieval method and system based on image segmentation and fuzzy pattern recognition | |
| CN105426924B (en) | A kind of scene classification method based on image middle level features | |
| CN104850854A (en) | Talc ore product sorting processing method and talc ore product sorting system | |
| CN102509109B (en) | Method for distinguishing Thangka image from non-Thangka image | |
| CN110738672A (en) | image segmentation method based on hierarchical high-order conditional random field | |
| CN103985130A (en) | Image significance analysis method for complex texture images | |
| CN108090485A (en) | Display foreground extraction method based on various visual angles fusion | |
| CN106844785A (en) | Saliency segmentation-based content-based image retrieval method | |
| CN109145964B (en) | A method and system for realizing image color clustering | |
| CN107622280B (en) | Modular image saliency detection method based on scene classification | |
| CN114373079A (en) | A Fast and Accurate Ground Penetrating Radar Target Detection Method | |
| CN108073940B (en) | Method for detecting 3D target example object in unstructured environment | |
| Song et al. | A new method of construction waste classification based on two-level fusion |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| TR01 | Transfer of patent right | Effective date of registration: 2025-03-12. Address after: Room 1101, 11th Floor, Jianke Building, No. 99 Yanta Road, Beilin District, Xi'an City, Shaanxi Province 710055. Patentee after: Xi'an Jiankong Intelligent Perception Operation and Maintenance Co.,Ltd. Country or region after: China. Address before: 710055 No. 13, Yanta Road, Shaanxi, Xi'an. Patentee before: Xi'an University of Architecture and Technology. Country or region before: China. | |
| TR01 | Transfer of patent right | | |