
CN109815815B - A Pedestrian Re-identification Method Based on Metric Learning and Support Vector Machine Integration - Google Patents

A Pedestrian Re-identification Method Based on Metric Learning and Support Vector Machine Integration

Info

Publication number
CN109815815B
CN109815815B (application CN201811576219.0A)
Authority
CN
China
Prior art keywords
pedestrian
support vector
vector machine
label information
metric learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811576219.0A
Other languages
Chinese (zh)
Other versions
CN109815815A (en)
Inventor
李华锋
赵丹丹
王红斌
余正涛
线岩团
文永华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN201811576219.0A priority Critical patent/CN109815815B/en
Publication of CN109815815A publication Critical patent/CN109815815A/en
Application granted granted Critical
Publication of CN109815815B publication Critical patent/CN109815815B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a pedestrian re-identification method that integrates metric learning with a support vector machine, in the technical fields of image processing and pattern recognition. The method first generates a pedestrian feature matrix carrying pedestrian label information; processes the nonlinear space M used to measure pedestrian distances; sets the pedestrian label information used inside the support vector machine; introduces a slack variable into the support vector machine and then uses the support vector machine as a constraint on the nonlinear space; scales the constraint on the nonlinear space M; and finds the optimal projection matrix and classifier, performing pedestrian recognition with the integrated metric-learning/SVM recognition model to obtain the recognition rate. Compared with existing methods, the proposed method effectively mines and exploits the label information in pedestrian data sets, so that the pedestrian matching rate is markedly improved.

Description

A pedestrian re-identification method based on the integration of metric learning and a support vector machine

Technical Field

The invention relates to a pedestrian re-identification method based on the integration of metric learning and a support vector machine, belonging to the technical fields of image processing and pattern recognition.

Background Art

With the nationwide promotion of smart cities and safe cities, video surveillance systems now cover most major cities in China. The volume of pedestrian data recorded by these systems is enormous, and processing it manually is inefficient and costly; pedestrian re-identification technology can effectively improve work efficiency and save resources. The main task of pedestrian re-identification is to determine whether pedestrians captured by non-overlapping cameras are the same person: a pedestrian is selected under camera a as the target, and the system judges whether that pedestrian has appeared under camera b and, if so, locates the target pedestrian. Pedestrian re-identification has therefore attracted great attention from researchers.

At present, although robust pedestrian re-identification has drawn researchers' attention and several feasible solutions have been proposed, experimental results still cannot meet practical needs, especially when pedestrians deliberately change their outward appearance. Because of changes in viewpoint and illumination and differences in camera parameter settings, the same pedestrian can exhibit different low-level visual features under different views, lighting conditions, and cameras, while different pedestrians can exhibit similar visual features; as a result, existing pedestrian re-identification technology falls far short of the needs of practical applications.

Pedestrian re-identification research falls into two main categories: feature-based methods and metric-learning-based methods. Feature-based methods start from low-level pedestrian features and extract features with stronger discriminative and expressive power, while metric-learning-based methods seek algorithms with better matching performance from the perspective of metric learning. Feature-based methods use pedestrian label information to extract low-level features, but under drastic illumination changes and differing camera parameter settings, low-level features such as color and texture vary greatly, resulting in low recognition accuracy. Metric-learning-based methods, by contrast, account for illumination changes, camera settings, and pedestrian appearance, and use the paired label information of the same pedestrian across views to shrink the metric distance between images of that pedestrian. However, they consider only the label information of the same pedestrian across views, not the label information between different pedestrians across views. Under interference from complex backgrounds such as similar clothing and similar poses, metric-learning-based re-identification algorithms therefore recognize poorly and lack robustness.

Summary of the Invention

The invention provides a pedestrian re-identification method integrating metric learning with a support vector machine, which mines and fully exploits the label information between pedestrians under different views to improve the accuracy of pedestrian recognition.

The technical scheme of the invention is a pedestrian re-identification method based on the integration of metric learning and a support vector machine: first, generate a pedestrian feature matrix carrying pedestrian label information; process the nonlinear space M that measures pedestrian distances; set the pedestrian label information used inside the support vector machine; introduce a slack variable into the support vector machine and then use the support vector machine as a constraint on the nonlinear space; scale the constraint on the nonlinear space M; and find the optimal projection matrix and classifier, performing pedestrian recognition with the integrated metric-learning/SVM recognition model to obtain the recognition rate.

The specific steps of the pedestrian re-identification method are as follows:

Step 1. Project all pedestrian features under views a and b into the same nonlinear space M ∈ R^(m×n). Using the selection formula

[formula image]

find, under view b, the pedestrian x_c that is most similar to a given pedestrian x_i^a under view a but is not the same person, and generate the pedestrian feature matrix x_c carrying pedestrian label information.
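To make the Step 1 selection concrete, the nearest-impostor search can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: plain Euclidean distance stands in for the learned metric (the patent's selection formula is shown only as an image), and the array layout (aligned rows, one image per person per view) is an assumption.

```python
import numpy as np

def nearest_impostors(Xa, Xb):
    """For each person i in view a, find the feature in view b that is
    closest to Xa[i] but belongs to a different person (index j != i).
    Rows are aligned: Xa[i] and Xb[i] depict the same person."""
    # pairwise squared Euclidean distances between views a and b
    d = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d, np.inf)      # exclude the true match
    c = d.argmin(axis=1)             # index of the nearest impostor
    return Xb[c], c                  # impostor feature matrix x_c

Xa = np.array([[0.0, 0.0], [5.0, 5.0]])
Xb = np.array([[0.1, 0.0], [4.0, 4.0]])
Xc, idx = nearest_impostors(Xa, Xb)
```

Each row of `Xc` then plays the role of x_c for the corresponding pedestrian in view a.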

Step 2. In the nonlinear space M, require that for each i-th person x_i^a under view a, the metric distance between x_i^a and the pedestrian x_c under view b that is most similar to x_i^a but is not the same person be smaller than the metric distance between x_i^a and its own image x_i^b under view b, i.e.

[formula image]

Step 3. If all pedestrian features were processed as in Step 2, overfitting would occur for pairs whose projected features already satisfy the above condition. Therefore, the Step 2 treatment is applied only when the projected features indicate that the same pedestrian's appearance differs greatly (for example, very different clothing); when different pedestrians merely dress similarly, no processing is done and the corresponding term is set to 0, i.e.

[formula image]
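The case split in Steps 2 and 3 can be sketched as a boolean mask over the training triplets. This is a hedged illustration: squared Euclidean distance after projection stands in for the patent's metric, and all variable names are assumptions.

```python
import numpy as np

def active_constraints(Xa, Xb, Xc, M):
    """Keep the Step 2 term only for the hard case where, after projection
    by M, the same person's two views (x_i^a, x_i^b) lie farther apart than
    x_i^a and its nearest impostor x_c; otherwise the term is 0 (Step 3)."""
    Pa, Pb, Pc = Xa @ M.T, Xb @ M.T, Xc @ M.T
    d_genuine = ((Pa - Pb) ** 2).sum(axis=1)    # same person, views a vs b
    d_impostor = ((Pa - Pc) ** 2).sum(axis=1)   # nearest non-match
    return d_genuine > d_impostor               # True -> apply Step 2

M = np.eye(2)
mask = active_constraints(np.array([[0.0, 0.0], [1.0, 1.0]]),
                          np.array([[3.0, 0.0], [1.0, 1.2]]),
                          np.array([[1.0, 0.0], [0.0, 0.0]]),
                          M)
```

In this toy example the first pedestrian is a hard case (genuine pair far apart, impostor close), the second is not.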

Step 4. The pedestrian label information is set as follows: if x_i^a and x_j^b are the same person, their label y_ij is set to 1, otherwise to -1; i.e.

[formula image]

where [formula image] denotes the metric distance between the pedestrian features in the support vector machine.
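The Step 4 labelling rule can be sketched directly as a matrix construction; a minimal sketch, assuming each view's person identities are available as integer arrays (the names `ids_a`, `ids_b` are illustrative).

```python
import numpy as np

def label_matrix(ids_a, ids_b):
    """y_ij = +1 when x_i^a and x_j^b are the same person, -1 otherwise.
    ids_a / ids_b hold the person identity of each feature in each view."""
    ids_a = np.asarray(ids_a)[:, None]   # column vector, broadcast over j
    ids_b = np.asarray(ids_b)[None, :]   # row vector, broadcast over i
    return np.where(ids_a == ids_b, 1, -1)

Y = label_matrix([1, 2, 3], [1, 3, 2])
```

Note that this uses the label information of *all* cross-view pairs, not only the paired labels of the same pedestrian, which is the point of the method.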

Step 5. A traditional support vector machine introduces a slack variable ξ_ij, giving the inequality

[formula image]

where w is the classifier of the support vector machine; the constraint of the nonlinear space M is then introduced into the support vector machine, i.e.:

[formula image]

s.t. y_ij(w(Mx_ai - Mx_bj) + c) > 1 - ξ_ij
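The slack variable of Step 5 is the usual soft-margin hinge quantity. The sketch below computes, for every cross-view pair, the smallest ξ_ij satisfying the written constraint y_ij(w(Mx_ai - Mx_bj) + c) ≥ 1 - ξ_ij; the dense-matrix layout and all names are assumptions, and the M, w, c values in the example are arbitrary stand-ins rather than trained quantities.

```python
import numpy as np

def hinge_slacks(Xa, Xb, Y, M, w, c):
    """Smallest slack per pair: xi_ij = max(0, 1 - y_ij * score_ij),
    where score_ij = w (M x_i^a - M x_j^b) + c."""
    Pa, Pb = Xa @ M.T, Xb @ M.T                 # project both views with M
    diff = Pa[:, None, :] - Pb[None, :, :]      # M x_i^a - M x_j^b, all pairs
    score = diff @ w + c                        # w(Mx_ai - Mx_bj) + c
    return np.maximum(0.0, 1.0 - Y * score)     # slack xi_ij per pair

xi = hinge_slacks(np.array([[2.0, 0.0]]),
                  np.array([[0.0, 0.0], [2.5, 0.0]]),
                  np.array([[1, -1]]),
                  np.eye(2), np.array([1.0, 0.0]), 0.0)
```

A pair with zero slack already satisfies the margin; a positive slack marks a violated constraint that training must push on.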

Step 6. Appropriately scale the constraint y_ij(w(Mx_ai - Mx_bj) + c) > 1 - ξ_ij of the nonlinear space M; after relaxing and optimizing the constraint, eliminate ξ_ij, i.e.

[formula image]

Step 7. Find the optimal solution of the Step 6 formula, obtaining the projection matrix M and the classifier w by training; then perform pedestrian recognition with M and w in the recognition model integrating metric learning and the support vector machine to obtain the recognition rate s, where the integrated recognition model is

[formula image]

Here c serves to limit the range of the classification similarity values.
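Once M and w are trained, ranking a probe against a gallery with the written scoring function can be sketched as below. The concrete M, w, c values in the example are arbitrary stand-ins (the patent obtains them by solving the Step 6 objective), and the convention that higher scores rank first follows from genuine pairs being labelled +1.

```python
import numpy as np

def rank_gallery(xa, Xb, M, w, c):
    """Score probe xa (view a) against every gallery feature in Xb (view b)
    with w(M xa - M xbj) + c, and return gallery indices best-first."""
    diff = (M @ xa)[None, :] - Xb @ M.T     # M xa - M xbj for each gallery j
    scores = diff @ w + c
    order = np.argsort(-scores)             # gallery indices, best first
    return order, scores

order, scores = rank_gallery(np.array([2.0, 0.0]),
                             np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]]),
                             np.eye(2), np.array([1.0, 0.0]), 0.5)
```

The position of the true match in `order` is exactly the rank counted by the CMC evaluation used in the experiments below.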

The beneficial effects of the invention are:

The invention integrates metric learning and a support vector machine. Compared with existing methods, the proposed method effectively mines and exploits the label information in pedestrian data sets, markedly improving the pedestrian matching rate.

Brief Description of the Drawings

Fig. 1 is a flowchart of the invention, in which y denotes the pedestrian label information matrix used in the support vector machine.

Fig. 2 shows comparative experimental results of the metric-learning model alone and the integrated metric-learning/SVM model on four data sets.

Detailed Description of the Embodiments

Embodiment 1: As shown in Figs. 1-2, a pedestrian re-identification method based on the integration of metric learning and a support vector machine: first, generate a pedestrian feature matrix carrying pedestrian label information; process the nonlinear space M that measures pedestrian distances; set the pedestrian label information used inside the support vector machine; introduce a slack variable into the support vector machine and then use the support vector machine as a constraint on the nonlinear space; scale the constraint on the nonlinear space M; and find the optimal projection matrix and classifier, performing pedestrian recognition with the integrated recognition model to obtain the recognition rate.

The specific steps of the method are as described in Steps 1 to 7 above.

To compare with existing methods, pedestrian re-identification experiments were conducted on five data sets: VIPeR, iLIDS-VID, CUHK01, PRID 2011, and PRID 450S, and the average of ten-fold cross-validation was taken as the final result.

Consistent with the compared methods, the cumulative match characteristic (CMC) curve is used as the evaluation index. Although a CMC curve lists the matching rates from rank 1 to rank 20, in practical applications such as image retrieval the rank-1 value is usually the most important.
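The CMC metric used here can be computed as follows; a small self-contained sketch, assuming a square probe-by-gallery distance matrix whose diagonal holds the true matches.

```python
import numpy as np

def cmc(dist):
    """dist[i, j] is the distance from probe i to gallery j, with the true
    match at j == i.  CMC(k) is the fraction of probes whose true match
    appears within the top-k ranked gallery entries."""
    order = dist.argsort(axis=1)                               # ascending
    true_rank = (order == np.arange(len(dist))[:, None]).argmax(axis=1)
    ks = np.arange(1, dist.shape[1] + 1)
    return np.array([(true_rank < k).mean() for k in ks])

D = np.array([[0.1, 0.9, 0.8],
              [0.7, 0.4, 0.2],
              [0.6, 0.5, 0.3]])
curve = cmc(D)
```

In this toy matrix two of three probes rank their true match first, so the rank-1 rate is 2/3 and the curve saturates at 1.0 by rank 2.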

To verify the effect of the support vector machine, the proposed metric model alone is taken as model one ("Ours-svm" in Fig. 2) and the metric-learning-plus-SVM model as model two ("Ours" in Fig. 2). The CMC curves of the two models on the VIPeR, iLIDS-VID, PRID 2011, and PRID 450S data sets are shown in Fig. 2. Before rank 5, adding the support vector machine to metric learning clearly improves the pedestrian recognition rate; the improvement gradually levels off as the rank value increases.

On the VIPeR data set, the method is compared with the PCCA, LFDA, KISSME, LADF, Mid-filter, ECM, MFA, kLFDA, RD, and SR models; results are shown in Table 1. Although a CMC curve lists the rank-1 to rank-20 matching rates, the rank-1 value matters most in practical applications such as image retrieval, and Table 1 shows that the proposed method performs best at rank 1 and rank 5.

Table 1: Comparison results of various methods on the VIPeR data set, listing the rank-1, rank-5, rank-10, and rank-20 matching rates (%)

Methods      Rank1  Rank5  Rank10  Rank20
PCCA         19.3   48.9   64.9    80.3
LFDA         19.7   46.7   62.1    77.0
KISSME       19.6   48.0   62.2    77.0
LADF         29.3   61.0   76.0    86.2
Mid-filter   29.1   52.3   66.0    79.9
ECM          38.2   67.2   78.3    87.9
MFA          32.2   66.0   79.7    90.6
RD           33.3   41.5   78.4    88.5
kLFDA        32.3   65.8   79.7    90.6
SR           32.9   62.0   75.9    89.2
Ours         55.5   67.2   73.2    80.5

On the iLIDS-VID data set, comparison with SDALF-SS, Color+LBP+DTW, ISR, DVR, DVDL, PHDL+WHOS+STFV3D, Salience+DVR, and KISSME is shown in Table 2, from which the proposed method achieves the highest rank-1 and rank-5 recognition rates.

Table 2: Comparison results of various methods on the iLIDS-VID data set, listing the rank-1, rank-5, rank-10, and rank-20 matching rates (%)

[table image]

On the CUHK01 data set, comparison with Rcca, ITML, KISSME, GenericMetric, SalMatch, kLFDA, MidFilter, MirrorKMFA, LOMO+LADF, and LOMO+XQDA is shown in Table 3, from which the proposed method achieves the best recognition at rank 1.

Table 3: Comparison results of various methods on the CUHK01 data set, listing the rank-1, rank-5, rank-10, and rank-20 matching rates (%)

Methods        Rank1  Rank5  Rank10  Rank20
Rcca           14.9   32.6   43.8    55.5
ITML           16.0   35.2   45.6    59.8
KISSME         10.3   27.2   37.5    49.7
GenericMetric  20.0   43.6   56.0    69.3
SalMatch       28.5   45.9   55.7    68.0
kLFDA          26.1   49.4   58.4    71.8
MidFilter      34.3   55.1   65.0    74.9
MirrorKMFA     40.4   64.6   75.3    84.1
LOMO+LADF      58.0   83.7   90.5    94.9
LOMO+XQDA      63.2   83.9   90.0    94.4
Ours           64.5   81.9   85      88

On the PRID 2011 data set, comparison with PPLM, RDC, LOMO+LADF, MetricEnsemble, LOMO+M, XQDA, LOMO+XQDA, DVR, and Salience+DVR is shown in Table 4; apart from a less ideal rank-20 result, the proposed method achieves the best recognition at rank 1, rank 5, and rank 10.

Table 4: Comparison results of various methods on the PRID 2011 data set, listing the rank-1, rank-5, rank-10, and rank-20 matching rates (%)

Methods         Rank1  Rank5  Rank10  Rank20
PPLM            15.0   32.0   42.0    54.0
RDC             15.5   38.8   53.2    69.0
LOMO+LADF       16.2   34.0   44.4    59.5
MetricEnsemble  17.9   39.0   50.0    62.0
LOMO+M          15.2   36.1   48.3    60.4
XQDA            24.6   49.3   62.8    76.3
LOMO+XQDA       26.7   49.9   61.9    73.8
DVR             28.9   55.3   65.5    82.8
Salience+DVR    41.7   64.5   77.5    88.8
Ours            72.3   86.8   92      96.7

On the PRID 450S data set, comparison with ELF, KISSME, EIML, SCNCD, ECM, TSR, MEDVL, KISSME-MGT, LOMO+LADF, KLFDA-MGT, MirrorKMFA, and other methods is shown in Table 5; the proposed method performs best at rank 1.

Table 5: Comparison results of various methods on the PRID 450S data set, listing the rank-1, rank-5, rank-10, and rank-20 matching rates (%)

Methods      Rank1  Rank5  Rank10  Rank20
ELF          30.6   -      73.6    84.2
KISSME       33.0   -      71.0    79.0
EIML         35.0   -      68.0    77
SCNCD        41.5   66.6   75.9    84.4
ECM          41.9   66.3   76.9    84.9
TSR          44.9   71.7   77.5    86.7
MEDVL        45.9   73.0   82.9    91.1
KISSME-MGT   46.1   73.3   83.3    90.7
LOMO+LADF    47.8   74.7   82.8    90.9
KLFDA-MGT    46.1   73.3   83.3    90.7
MirrorKMFA   55.4   79.3   87.8    93.9
Ours         58.4   68.7   73.1    80.2

The specific embodiments of the invention have been described in detail above with reference to the accompanying drawings, but the invention is not limited to the above embodiments; various changes can be made within the scope of knowledge possessed by those of ordinary skill in the art without departing from the purpose of the invention.

Claims (1)

1.一种基于度量学习和支持向量机相集成的行人再识别方法,其特征在于:首先生成带有行人标签信息的行人特征矩阵;对度量行人距离的非线性空间M做处理;设置支持向量机内用到的行人标签信息;支持向量机引入约束变量,再把支持向量机作为非线性空间的约束条件;对非线性空间M的约束条件进行缩放处理;找到投影矩阵和分类器的最优解,用度量学习和支持向量机相集成的识别模型进行行人识别,得到识别率;1. a pedestrian re-identification method based on metric learning and support vector machine integration, it is characterized in that: first generate a pedestrian feature matrix with pedestrian label information; do processing to the nonlinear space M of measuring pedestrian distance; set support vector The pedestrian label information used in the machine; the support vector machine introduces the constraint variable, and then the support vector machine is used as the constraint condition of the nonlinear space; the constraint condition of the nonlinear space M is scaled; the optimal projection matrix and classifier are found. Solution, use the recognition model integrated with metric learning and support vector machine to recognize pedestrians, and get the recognition rate; 所述基于度量学习和支持向量机相集成的行人再识别方法的具体步骤如下:The specific steps of the pedestrian re-identification method based on metric learning and support vector machine integration are as follows: Stpe1、把a,b视角下行人的特征全部投影到同一个非线性空间M∈Rm×n内,利用公式
Figure FDA0003031058060000011
找出在a视角下与b视角下最相似但不是自己的行人
Figure FDA0003031058060000012
来生成带有行人标签信息的行人特征矩阵xc
Step1. Project all the features of pedestrians in a and b perspectives into the same nonlinear space M∈R m×n , using the formula
Figure FDA0003031058060000011
Find the pedestrian who is most similar to the person in the perspective a and the perspective b but is not yourself
Figure FDA0003031058060000012
to generate a pedestrian feature matrix x c with pedestrian label information;
Stpe2、在非线性空间M内,要求每一个a视角下的第i个人
Figure FDA0003031058060000013
与在b视角下与
Figure FDA0003031058060000014
最相似的但不是同一个人的行人
Figure FDA0003031058060000015
之间的度量距离要小于a视角下的第i个人
Figure FDA0003031058060000016
与它自己在b视角下
Figure FDA0003031058060000017
之间的度量距离,即
Stpe2. In the nonlinear space M, the ith person under each a view angle is required
Figure FDA0003031058060000013
and in view b and
Figure FDA0003031058060000014
Pedestrians who are most similar but not the same person
Figure FDA0003031058060000015
The metric distance between them is smaller than the ith person under a view
Figure FDA0003031058060000016
with itself in b's perspective
Figure FDA0003031058060000017
the metric distance between
Figure FDA0003031058060000018
Figure FDA0003031058060000018
Stpe3、如果所有的行人特征都按Stpe2处理,投影后的特征本身就满足上述情况就会出现过拟合情况,所以如果投影后的特征本身就满足同一个行人穿着差异巨大则按Stpe2处理,如果不同行人穿着相似则不做任何处理,则
Figure FDA0003031058060000019
取0;即
Stpe3. If all pedestrian features are processed according to Stpe2, the projected feature itself will meet the above conditions, and overfitting will occur. Therefore, if the projected feature itself satisfies the huge difference in the wearing of the same pedestrian, it will be processed according to Stpe2. If If different pedestrians wear similar clothes, no action will be taken, then
Figure FDA0003031058060000019
take 0; that is
Figure FDA00030310580600000110
Figure FDA00030310580600000110
Stpe4、行人的标签信息的设置方式为:如果
Figure FDA00030310580600000111
Figure FDA00030310580600000112
是同一人,则设置它们的标签信息yij为1,如果不是则为-1;即
Stpe4, the setting method of the pedestrian's label information is: if
Figure FDA00030310580600000111
and
Figure FDA00030310580600000112
are the same person, set their label information y ij to 1, or -1 if they are not; i.e.
Figure FDA0003031058060000021
Figure FDA0003031058060000021
其中,
Figure FDA0003031058060000022
表示在支持向量机中
Figure FDA0003031058060000023
行人特征之间的度量距离;
in,
Figure FDA0003031058060000022
represented in a support vector machine
Figure FDA0003031058060000023
the metric distance between pedestrian features;
Stpe5、传统支持向量机引入一个约束变量ξij即为不等式
Figure FDA0003031058060000024
其中,w为支持向量机的分类器;再把非线性空间M的约束条件引入支持向量机中,即:
Stpe5, the traditional support vector machine introduces a constraint variable ξ ij which is the inequality
Figure FDA0003031058060000024
Among them, w is the classifier of the support vector machine; then the constraints of the nonlinear space M are introduced into the support vector machine, namely:
Figure FDA0003031058060000025
Figure FDA0003031058060000025
Figure FDA0003031058060000026
Figure FDA0003031058060000026
Stpe6、把非线性空间M的约束条件(yij(w(Mxai-Mxbj)+c)>1-ξij)进行适当的缩放处理,将条件约束松弛优化后将ξij消去,即Step 6. Perform appropriate scaling processing on the constraint condition (y ij (w(Mx ai -Mx bj )+c)>1-ξ ij ) of the nonlinear space M, and eliminate ξ ij after the conditional constraints are relaxed and optimized, that is,
Figure FDA0003031058060000027
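Once ξ_ij is eliminated as in Step 6, each slack term collapses to a hinge. A minimal sketch of the resulting pairwise loss sum follows (the patent's full objective, including any regularizers or weighting, is not reproduced here; all names are assumptions):

```python
import numpy as np

def hinge_sum(M, w, c, Xa, Xb, Y):
    """Sum over all (i, j) pairs of max(0, 1 - y_ij (w.(M x_ai - M x_bj) + c)),
    i.e. the Step-5 constraints folded into the objective with the slack
    variables eliminated."""
    Pa = Xa @ M.T                # projected camera-a features, shape (na, d)
    Pb = Xb @ M.T                # projected camera-b features, shape (nb, d)
    scores = (Pa @ w)[:, None] - (Pb @ w)[None, :] + c   # w.(M x_ai - M x_bj) + c
    return float(np.maximum(0.0, 1.0 - Y * scores).sum())
```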
Step 7. Find the optimal solution of the formula in Step 6, and obtain the projection matrix M and the classifier w through training and learning. Then perform pedestrian recognition with the projection matrix M and the classifier w in the recognition model that integrates metric learning and the support vector machine, obtaining the recognition rate s. The integrated recognition model is as follows:
Figure FDA0003031058060000028
where the role of c is to limit the range of the classification similarity values.
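For Step 7, the recognition stage can be sketched as scoring a query against each gallery feature with the learned M, w and c and ranking by the score. This reading (higher s for same-person pairs, consistent with the Step-5 constraints where genuine pairs satisfy w(Mx_ai - Mx_bj) + c >= 1) is an assumption, as is every name below:

```python
import numpy as np

def rank_gallery(M, w, c, x_query, gallery):
    """Score s_g = w.(M x_query - M x_g) + c for every gallery feature
    and return gallery indices sorted from highest score to lowest."""
    scores = np.array([w @ (M @ x_query - M @ g) + c for g in gallery])
    return np.argsort(-scores)   # descending order of s

order = rank_gallery(np.eye(2), np.array([1.0, 0.0]), 0.0,
                     np.array([5.0, 0.0]),
                     [np.array([4.0, 0.0]), np.array([1.0, 0.0])])
# order ranks gallery indices by decreasing score
```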
CN201811576219.0A 2018-12-22 2018-12-22 A Pedestrian Re-identification Method Based on Metric Learning and Support Vector Machine Integration Active CN109815815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811576219.0A CN109815815B (en) 2018-12-22 2018-12-22 A Pedestrian Re-identification Method Based on Metric Learning and Support Vector Machine Integration

Publications (2)

Publication Number Publication Date
CN109815815A CN109815815A (en) 2019-05-28
CN109815815B true CN109815815B (en) 2021-06-18

Family

ID=66602380


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017212206A1 (en) * 2016-06-06 2017-12-14 Cirrus Logic International Semiconductor Limited Voice user interface

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080101705A1 (en) * 2006-10-31 2008-05-01 Motorola, Inc. System for pattern recognition with q-metrics
KR101972356B1 (en) * 2010-12-21 2019-04-25 한국전자통신연구원 An apparatus and a method for detecting upper body
US10552544B2 (en) * 2016-09-12 2020-02-04 Sriram Chakravarthy Methods and systems of automated assistant implementation and management
US20180173940A1 (en) * 2016-12-19 2018-06-21 Canon Kabushiki Kaisha System and method for matching an object in captured images
CN106778921A (en) * 2017-02-15 2017-05-31 张烜 Personnel based on deep learning encoding model recognition methods again
CN107844752A (en) * 2017-10-20 2018-03-27 常州大学 A kind of recognition methods again of the pedestrian based on block rarefaction representation
CN108345860A (en) * 2018-02-24 2018-07-31 江苏测联空间大数据应用研究中心有限公司 Personnel based on deep learning and learning distance metric recognition methods again
CN108509854B (en) * 2018-03-05 2020-11-17 昆明理工大学 Pedestrian re-identification method based on projection matrix constraint and discriminative dictionary learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant