
CN112446270B - Person re-identification network training method, person re-identification method and device - Google Patents


Info

Publication number
CN112446270B
CN112446270B
Authority
CN
China
Prior art keywords
image
pedestrian
training
anchor
identification
Prior art date
Legal status
Active
Application number
CN201910839017.9A
Other languages
Chinese (zh)
Other versions
CN112446270A (en)
Inventor
魏龙辉
张天宇
谢凌曦
田奇
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd
Priority to CN201910839017.9A
Priority to PCT/CN2020/113041
Publication of CN112446270A
Application granted
Publication of CN112446270B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a training method for a pedestrian re-identification network, a pedestrian re-identification method, and an apparatus, in the field of artificial intelligence and, more specifically, computer vision. The method includes: obtaining M training images and their annotation data; initializing the network parameters of the pedestrian re-identification network to obtain the initial values of those parameters; inputting a batch of the M training images into the network for feature extraction to obtain a feature vector for each training image in the batch; determining a loss function from the feature vectors of the batch; and, based on the value of the loss function, obtaining a pedestrian re-identification network that meets preset requirements. The present application can train a well-performing pedestrian re-identification network using only annotation data from single image capture devices.

Description

Person re-identification network training method, person re-identification method and device

Technical Field

The present application relates to the field of computer vision and, more specifically, to a training method for a pedestrian re-identification network, a pedestrian re-identification method, and an apparatus.

Background Art

Computer vision is an integral part of intelligent and autonomous systems across application fields such as manufacturing, inspection, document analysis, medical diagnosis, and the military. It is the study of how to use cameras/image capture devices and computers to obtain the data and information we need about a photographed subject. Figuratively speaking, it equips a computer with eyes (cameras/image capture devices) and a brain (algorithms) so that, in place of the human eye, it can identify, track, and measure targets and thereby perceive its environment. Because perception can be viewed as extracting information from sensory signals, computer vision can also be viewed as the science of how to make artificial systems "perceive" from images or multidimensional data. In general, computer vision replaces the visual organs with various imaging systems to obtain input information, and replaces the brain with a computer to process and interpret that information. The ultimate research goal of computer vision is to enable computers to observe and understand the world through vision as humans do, and to adapt to their environment autonomously.

The field of surveillance often involves person re-identification (ReID), also called pedestrian re-identification: the use of computer vision techniques to determine whether a specific pedestrian is present in an image or a video sequence.

Traditional solutions generally train the pedestrian re-identification network on training data with annotations that span image capture devices, so that the network learns to distinguish images of different pedestrians and can then identify them. However, such training data includes images of the same pedestrian taken by different image capture devices, and these images must be manually annotated so that the images of the same pedestrian from different devices are associated with one another (that is, pedestrians are associated across image capture devices). In many scenarios this cross-device association is very difficult, and as the number of people and the number of image capture devices grow, its difficulty rises sharply. The economic cost of such data annotation is high, and it consumes a great deal of time.

Summary of the Invention

The present application provides a training method for a pedestrian re-identification network, a pedestrian re-identification method, and an apparatus, so as to train a well-performing pedestrian re-identification network using only annotation data from single image capture devices.

In a first aspect, a method for training a pedestrian re-identification network is provided. The method includes:

Step 1: obtain training data.

The training data in step 1 includes M training images and the annotation data of the M training images, where M is an integer greater than 1.

Step 2: initialize the network parameters of the pedestrian re-identification network to obtain the initial values of those parameters.

Repeat steps 3 to 5 below until the pedestrian re-identification network meets the preset requirements.

Step 3: input a batch of the M training images into the pedestrian re-identification network for feature extraction to obtain a feature vector for each training image in the batch.

Step 4: determine the value of the loss function from the feature vectors of the batch.

Step 5: update the network parameters of the pedestrian re-identification network according to the value of the loss function.
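Assembled end to end, steps 1 through 5 can be sketched as the following training loop. This is a minimal illustration rather than the patent's implementation: the "network" is a single random linear projection, the loss is a placeholder stand-in (the patent's loss is instead built from hardest positive and negative sample distances), and all data, sizes, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (hypothetical toy data): M "training images" as 16-dim vectors,
# each annotated with a pedestrian ID and the capturing device's ID.
M = 20
images = rng.normal(size=(M, 16))
person_ids = rng.integers(0, 5, size=M)
camera_ids = rng.integers(0, 3, size=M)

# Step 2: randomly initialize the network parameters (a single linear
# projection stands in for the real re-identification network).
W = rng.normal(scale=0.1, size=(16, 8))

losses = []
for step in range(50):                            # repeat steps 3-5
    idx = rng.choice(M, size=8, replace=False)    # a batch of training images
    feats = images[idx] @ W                       # step 3: feature vectors
    # Step 4: a stand-in loss (mean squared feature magnitude); the
    # patent's loss instead averages per-anchor first and second
    # differences built from hardest-sample distances.
    loss = float((feats ** 2).mean())
    losses.append(loss)
    # Step 5: update the parameters by gradient descent on the loss.
    grad_W = 2.0 * images[idx].T @ feats / feats.size
    W -= 0.1 * grad_W

print(losses[0] > losses[-1])  # the loop drives the stand-in loss down
```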

In step 1 above, each of the M training images includes a pedestrian, and the annotation data of each training image includes the bounding box of the pedestrian in that image and pedestrian identification information. Different pedestrians correspond to different pedestrian identification information, and among the M training images, training images with the same pedestrian identification information come from the same image capture device. The M training images may be all of the training images used to train the pedestrian re-identification network; during training, a batch of the M training images may be selected each time and input into the network for processing.

The image capture device may specifically be a device capable of capturing images of pedestrians, such as a video camera or a still camera.

The pedestrian identification information in step 1 may also be called pedestrian identity information; it indicates a pedestrian's identity, and each pedestrian may correspond to unique identification information. It can be represented in many ways, as long as it indicates the pedestrian's identity. For example, it may be a pedestrian identity (ID); that is, each pedestrian may be assigned a unique ID.

In step 2 above, the network parameters of the pedestrian re-identification network may be set randomly to obtain their initial values.

In step 3 above, the batch of training images may include N anchor images, where the N anchor images are any N training images in the batch, and each anchor image corresponds to one hardest positive sample image, one first hardest negative sample image, and one second hardest negative sample image.

The hardest positive sample image, the first hardest negative sample image, and the second hardest negative sample image corresponding to each anchor image are described below.

Hardest positive sample image of an anchor image: the training image in the batch that has the same pedestrian identification information as the anchor image and whose feature vector is farthest from the anchor image's feature vector.

First hardest negative sample image of an anchor image: the training image in the batch that comes from the same image capture device as the anchor image, has different pedestrian identification information, and whose feature vector is closest to the anchor image's feature vector.

Second hardest negative sample image of an anchor image: the training image in the batch that comes from a different image capture device than the anchor image, has different pedestrian identification information, and whose feature vector is closest to the anchor image's feature vector.
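Under the assumption that feature distances are Euclidean, the three hardest samples defined above can be mined within a batch as follows. The function and variable names are illustrative, not from the patent; note that, per the single-device annotation premise, images sharing a pedestrian ID also share a device.

```python
import numpy as np

def hardest_samples(feats, person_ids, camera_ids, a):
    """Return, for anchor index `a`, the batch indices of its hardest
    positive, first hardest negative (same device, different pedestrian),
    and second hardest negative (different device, different pedestrian)."""
    d = np.linalg.norm(feats - feats[a], axis=1)   # distances to the anchor
    same_id = person_ids == person_ids[a]
    same_cam = camera_ids == camera_ids[a]
    others = np.arange(len(feats)) != a

    # Hardest positive: same pedestrian ID, farthest feature vector.
    pos = np.flatnonzero(same_id & others)
    # First hardest negative: same device, different ID, closest vector.
    neg1 = np.flatnonzero(~same_id & same_cam)
    # Second hardest negative: different device, different ID, closest vector.
    neg2 = np.flatnonzero(~same_id & ~same_cam)

    return (int(pos[np.argmax(d[pos])]),
            int(neg1[np.argmin(d[neg1])]),
            int(neg2[np.argmin(d[neg2])]))

# Tiny worked example: 1-D "features" make the answer easy to check.
feats = np.array([[0.0], [1.0], [2.0], [0.5], [3.0]])
person_ids = np.array([0, 0, 1, 2, 3])
camera_ids = np.array([0, 0, 0, 1, 1])
print(hardest_samples(feats, person_ids, camera_ids, 0))  # (1, 2, 3)
```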

In step 4 above, the value of the loss function is obtained by averaging the values of N first loss functions, where the value of each first loss function is computed from the first difference and the second difference corresponding to one of the N anchor images.

Here N is a positive integer smaller than M. When N = 1 there is only one first loss function, and its value can be used directly as the value of the loss function in step 4.

Optionally, the value of each first loss function is the sum of the first difference and the second difference corresponding to the anchor image.

Optionally, the value of each first loss function is the sum of the first difference, the second difference, and other constant terms corresponding to the anchor image.

The meanings of the first difference, the second difference, and the distances from which they are formed are explained below.

First difference of an anchor image: the hardest positive sample distance of the anchor image minus its second hardest negative sample distance.

Second difference of an anchor image: the second hardest negative sample distance of the anchor image minus its first hardest negative sample distance.

Hardest positive sample distance of an anchor image: the distance between the feature vector of the anchor image's hardest positive sample image and the anchor image's feature vector.

Second hardest negative sample distance of an anchor image: the distance between the feature vector of the anchor image's second hardest negative sample image and the anchor image's feature vector.

First hardest negative sample distance of an anchor image: the distance between the feature vector of the anchor image's first hardest negative sample image and the anchor image's feature vector.

In addition, in the present application, saying that several training images come from the same image capture device means that those training images were captured by the same image capture device.

In the present application, the hardest negative sample images from both different and the same image capture devices are taken into account when constructing the loss function, and the first and second differences are made as small as possible during training. This suppresses, as far as possible, the interference of device-specific information with the image information, so that the trained pedestrian re-identification network can extract features from images more accurately.

Specifically, during training of the pedestrian re-identification network, the network parameters are optimized so that the first difference and the second difference become as small as possible, and therefore the difference between the hardest positive sample distance and the second hardest negative sample distance, and the difference between the second hardest negative sample distance and the first hardest negative sample distance, are as small as possible. The network thus learns to distinguish, as far as possible, the features of the hardest positive sample image from those of the second hardest negative sample image, and the features of the second hardest negative sample image from those of the first hardest negative sample image, so that the trained network can extract image features better and more accurately.
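One hypothetical reading of the per-anchor loss described above is sketched below, with the optional "other constant terms" exposed as margin-like parameters c1 and c2. Practical triplet-style losses usually also clamp each term at zero, which the text does not state and this sketch does not do.

```python
import numpy as np

def first_loss(d_pos, d_neg2, d_neg1, c1=0.0, c2=0.0):
    """Per-anchor value: the first difference (hardest positive distance
    minus second hardest negative distance) plus the second difference
    (second hardest negative distance minus first hardest negative
    distance); c1 and c2 stand in for the optional constant terms."""
    first_diff = d_pos - d_neg2
    second_diff = d_neg2 - d_neg1
    return (first_diff + c1) + (second_diff + c2)

def batch_loss(anchor_distances):
    """Average the N per-anchor values, as in step 4."""
    return float(np.mean([first_loss(*d) for d in anchor_distances]))

# Two hypothetical anchors, each given as (hardest positive distance,
# second hardest negative distance, first hardest negative distance).
print(batch_loss([(1.0, 0.5, 0.2), (2.0, 1.0, 0.5)]))  # 1.15
```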

With reference to the first aspect, in certain implementations of the first aspect, the pedestrian re-identification network meets the preset requirements when at least one of the following conditions (1) to (3) is met:

(1) the number of training iterations of the pedestrian re-identification network is greater than or equal to a preset number;

(2) the value of the loss function is less than or equal to a preset threshold;

(3) the recognition performance of the pedestrian re-identification network reaches a preset requirement.

The preset threshold can be set flexibly based on experience. If it is set too large, the pedestrian recognition performance of the trained network may not be good enough; if it is set too small, the value of the loss function may fail to converge during training.

Optionally, the preset threshold lies in the range [0, 0.01].

Specifically, the preset threshold may be 0.01.
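The three stopping conditions can be combined into a simple check, sketched below. The maximum iteration count and the recognition-performance target (rank-1 accuracy, an assumed metric) are illustrative assumptions; the loss threshold of 0.01 follows the value suggested above.

```python
def meets_preset_requirements(num_iterations, loss_value, rank1_accuracy,
                              max_iterations=10_000,
                              loss_threshold=0.01,
                              target_rank1=0.90):
    """Training stops when any of conditions (1)-(3) holds:
    (1) enough training iterations have been run, (2) the loss is at or
    below the preset threshold, or (3) recognition performance (here an
    assumed rank-1 accuracy) reaches the preset requirement."""
    return (num_iterations >= max_iterations
            or loss_value <= loss_threshold
            or rank1_accuracy >= target_rank1)

print(meets_preset_requirements(100, 0.005, 0.50))  # True, via condition (2)
print(meets_preset_requirements(100, 0.500, 0.50))  # False
```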

With reference to the first aspect, in certain implementations of the first aspect, the value of the loss function being less than or equal to the preset threshold includes: the first difference is less than a first preset threshold and the second difference is less than a second preset threshold.

The first and second preset thresholds can likewise be determined from experience. If they are set too large, the pedestrian recognition performance of the trained network may not be good enough; if they are set too small, the value of the loss function may fail to converge during training.

Optionally, the first preset threshold lies in the range [0, 0.4].

Optionally, the second preset threshold lies in the range [0, 0.4].

Specifically, both the first preset threshold and the second preset threshold may be 0.1.

With reference to the first aspect, in certain implementations of the first aspect, the M training images come from multiple image capture devices, and the annotation data of training images from different image capture devices is obtained by labeling each device separately.

That is, the images of each image capture device can be labeled on their own, without considering whether the same pedestrian appears across different devices. Specifically, if multiple images captured by image capture device A include pedestrian X, then once the training images captured by device A have been labeled, there is no need to search images from other devices for pedestrian X. This avoids the process of looking for the same pedestrian across images from different devices, saves a large amount of labeling time, and reduces annotation complexity.

In a second aspect, a pedestrian re-identification method is provided. The method includes: obtaining an image to be identified; processing the image to be identified with a pedestrian re-identification network trained by the training method of the first aspect to obtain the feature vector of the image; and comparing the feature vector of the image to be identified with the feature vectors of existing pedestrian images to obtain a recognition result for the image.

In the present application, because the pedestrian re-identification network trained by the method of the first aspect extracts features better, performing pedestrian recognition with it achieves better recognition results.

With reference to the second aspect, in certain implementations of the second aspect, comparing the feature vector of the image to be identified with the feature vectors of existing pedestrian images to obtain a recognition result includes: outputting a target pedestrian image and the attribute information of the target pedestrian image.

The target pedestrian image may be the existing pedestrian image whose feature vector is most similar to that of the image to be identified, and its attribute information includes the capture time and capture location of the target pedestrian image. The attribute information may further include the pedestrian's identity information.
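The comparison step of the second aspect amounts to a nearest-neighbor search over the existing images' feature vectors. A minimal sketch, assuming Euclidean distance and a hypothetical list of attribute records (the field names are illustrative):

```python
import numpy as np

def identify(query_feat, gallery_feats, gallery_attrs):
    """Return the index of the most similar existing pedestrian image
    (the target pedestrian image) and its attribute information."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    best = int(np.argmin(dists))
    return best, gallery_attrs[best]

# Hypothetical gallery of three feature vectors with attribute records.
gallery_feats = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
gallery_attrs = [
    {"time": "2019-09-05 08:00", "location": "gate 1"},
    {"time": "2019-09-05 08:10", "location": "gate 2"},
    {"time": "2019-09-05 09:00", "location": "hall"},
]
best, attrs = identify(np.array([0.9, 1.1]), gallery_feats, gallery_attrs)
print(best, attrs["location"])  # 1 gate 2
```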

In a third aspect, a training apparatus for a pedestrian re-identification network is provided, comprising modules for performing the method of the first aspect.

In a fourth aspect, a pedestrian re-identification apparatus is provided, comprising modules for performing the method of the second aspect.

In a fifth aspect, a training apparatus for a pedestrian re-identification network is provided, comprising: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program is executed, the processor performs the method of the first aspect.

In a sixth aspect, a pedestrian re-identification apparatus is provided, comprising: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program is executed, the processor performs the method of the second aspect.

In a seventh aspect, a computer device is provided, comprising the training apparatus for a pedestrian re-identification network of the third aspect.

In the seventh aspect, the computer device may specifically be a server, a cloud device, or the like.

In an eighth aspect, an electronic device is provided, comprising the pedestrian re-identification apparatus of the fourth aspect.

In the eighth aspect, the electronic device may specifically be a mobile terminal (for example, a smartphone), a tablet computer, a laptop computer, an augmented reality/virtual reality device, a vehicle-mounted terminal device, or the like.

In a ninth aspect, a computer-readable storage medium is provided, storing program code that includes instructions for performing the steps of any one of the methods of the first or second aspect.

In a tenth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, it causes the computer to perform any one of the methods of the first or second aspect.

In an eleventh aspect, a chip is provided, comprising a processor and a data interface; the processor reads, through the data interface, instructions stored in a memory and performs any one of the methods of the first or second aspect.

Optionally, in one implementation, the chip may further include a memory storing instructions, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor performs any one of the methods of the first or second aspect.

The chip may specifically be a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).

It should be understood that, in the present application, the method of the first aspect may refer to the first aspect and the method in any one of its various implementations, and the method of the second aspect may refer to the second aspect and the method in any one of its various implementations.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the structure of a system architecture provided in an embodiment of the present application;

FIG. 2 is a schematic diagram of pedestrian re-identification using a convolutional neural network model provided in an embodiment of the present application;

FIG. 3 is a schematic diagram of a chip hardware structure provided in an embodiment of the present application;

FIG. 4 is a schematic diagram of a system architecture provided in an embodiment of the present application;

FIG. 5 is a schematic diagram of a possible application scenario of an embodiment of the present application;

FIG. 6 is a schematic diagram of the overall flow of a training method for a pedestrian re-identification network according to an embodiment of the present application;

FIG. 7 is a schematic flowchart of a training method for a pedestrian re-identification network according to an embodiment of the present application;

FIG. 8 is a schematic diagram of the process of determining the value of the loss function;

FIG. 9 is a schematic flowchart of a pedestrian re-identification method according to an embodiment of the present application;

FIG. 10 is a schematic block diagram of a training apparatus for a pedestrian re-identification network according to an embodiment of the present application;

FIG. 11 is a schematic block diagram of a training apparatus for a pedestrian re-identification network according to an embodiment of the present application;

FIG. 12 is a schematic block diagram of a pedestrian re-identification apparatus according to an embodiment of the present application;

FIG. 13 is a schematic block diagram of a pedestrian re-identification apparatus according to an embodiment of the present application.

具体实施方式 Detailed Description of Embodiments

下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。The following will describe the technical solutions in the embodiments of the present application in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only part of the embodiments of the present application, not all of the embodiments. Based on the embodiments in the present application, all other embodiments obtained by ordinary technicians in this field without creative work are within the scope of protection of this application.

本申请的方案可以应用在城市监控,平安城市等领域。The solution of this application can be applied in the fields of urban monitoring, safe city, etc.

具体地,本申请可以应用在智能监控系统寻人的场景中,下面对该场景下的应用进行介绍。Specifically, the present application can be applied in a scenario where an intelligent monitoring system is used to find a person, and the application in this scenario is introduced below.

智能监控系统寻人:Intelligent monitoring system to find people:

以部署在某园区的智能监控系统为例,该智能监控系统可以采集各个图像拍摄设备下拍摄到的行人的图像,形成图像库。接下来,可以利用图像库中的图像对行人再识别网络(也可以称为行人再识别模型)进行训练,得到训练好的行人再识别网络。Taking the intelligent monitoring system deployed in a certain park as an example, the intelligent monitoring system can collect images of pedestrians captured by various image capture devices to form an image library. Next, the images in the image library can be used to train the pedestrian re-identification network (also called the pedestrian re-identification model) to obtain a trained pedestrian re-identification network.

接下来,就可以利用该训练好的行人再识别网络提取采集到的行人图像的特征向量。当一个人行踪可疑,或者有其他需要跨镜头跟踪该行人的情况时,可以将行人再识别网络采集到的行人图像的特征向量与图像库中图像的特征向量进行对比,并返回特征向量最相似的行人图像,并给出这些图像的拍摄时间、位置等基本信息。再经过后续的核对筛选后,即可完成寻人过程。Next, the trained person re-identification network can be used to extract the feature vector of the collected pedestrian image. When a person's whereabouts are suspicious, or there are other situations where the pedestrian needs to be tracked across lenses, the feature vector of the pedestrian image collected by the pedestrian re-identification network can be compared with the feature vector of the image in the image library, and the pedestrian image with the most similar feature vector can be returned, and basic information such as the shooting time and location of these images can be given. After subsequent verification and screening, the person search process can be completed.
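The retrieval step above (comparing the query image's feature vector against the feature vectors in the image library and returning the closest matches) can be sketched as follows. This is an illustrative sketch only: the cosine-similarity metric, the function names, and the toy 3-D vectors are assumptions for demonstration, not part of the patent.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search_gallery(query_feat, gallery):
    # gallery: list of (image_id, feature_vector); return ids ranked
    # from most to least similar to the query feature.
    ranked = sorted(gallery,
                    key=lambda item: cosine_similarity(query_feat, item[1]),
                    reverse=True)
    return [image_id for image_id, _ in ranked]

# Toy 3-D features standing in for the network's extracted vectors.
gallery = [("cam1_001", [0.9, 0.1, 0.0]),
           ("cam2_007", [0.1, 0.9, 0.1]),
           ("cam3_042", [0.8, 0.2, 0.1])]
print(search_gallery([1.0, 0.0, 0.0], gallery))  # most similar image id first
```

In a deployed system the ranked ids would then be joined with each image's capture time and location for the subsequent verification step.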

在本申请方案中,行人再识别网络可以是一种神经网络(模型),为了更好地理解本申请方案,下面先对神经网络的相关术语和概念进行介绍。In the present application, the pedestrian re-identification network can be a neural network (model). In order to better understand the present application, the relevant terms and concepts of neural networks are introduced below.

(1)神经网络(1) Neural Network

神经网络可以是由神经单元组成的，神经单元可以是指以xs和截距1为输入的运算单元，该运算单元的输出可以如公式(1)所示：The neural network may be composed of neural units, and the neural unit may refer to an operation unit with x s and intercept 1 as input, and the output of the operation unit may be shown as formula (1):

$$h_{W,b}(x)=f\left(W^{T}x\right)=f\left(\sum_{s=1}^{n}W_{s}x_{s}+b\right)\qquad(1)$$

其中,s=1、2、……n,n为大于1的自然数,Ws为xs的权重,b为神经单元的偏置。f为神经单元的激活函数(activation functions),该激活函数用于对神经网络中的特征进行非线性变换,从而将神经单元中的输入信号转换为输出信号。该激活函数的输出信号可以作为下一层卷积层的输入,激活函数可以是sigmoid函数。神经网络是将多个上述单一的神经单元联结在一起形成的网络,即一个神经单元的输出可以是另一个神经单元的输入。每个神经单元的输入可以与前一层的局部接受域相连,来提取局部接受域的特征,局部接受域可以是由若干个神经单元组成的区域。Wherein, s=1, 2, ...n, n is a natural number greater than 1, Ws is the weight of xs , and b is the bias of the neural unit. f is the activation function of the neural unit, which is used to perform nonlinear transformation on the features in the neural network, thereby converting the input signal in the neural unit into the output signal. The output signal of the activation function can be used as the input of the next convolutional layer, and the activation function can be a sigmoid function. A neural network is a network formed by connecting multiple single neural units mentioned above, that is, the output of one neural unit can be the input of another neural unit. The input of each neural unit can be connected to the local receptive field of the previous layer to extract the features of the local receptive field. The local receptive field can be an area composed of several neural units.
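A minimal numeric illustration of the neural unit of formula (1): a weighted sum of the inputs plus a bias, passed through a sigmoid activation. All values here are illustrative.

```python
import math

def sigmoid(z):
    # Sigmoid activation f(z) = 1 / (1 + e^{-z}).
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(xs, ws, b):
    # Weighted sum of inputs plus the bias b, passed through the activation f.
    z = sum(w * x for w, x in zip(ws, xs)) + b
    return sigmoid(z)

print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.0))  # sigmoid(0.5*1 - 0.25*2) = sigmoid(0) = 0.5
```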

(2)深度神经网络(2) Deep Neural Networks

深度神经网络(deep neural network,DNN),也称多层神经网络,可以理解为具有多层隐含层的神经网络。按照不同层的位置对DNN进行划分,DNN内部的神经网络可以分为三类:输入层,隐含层,输出层。一般来说第一层是输入层,最后一层是输出层,中间的层数都是隐含层。层与层之间是全连接的,也就是说,第i层的任意一个神经元一定与第i+1层的任意一个神经元相连。A deep neural network (DNN), also known as a multi-layer neural network, can be understood as a neural network with multiple hidden layers. According to the position of different layers, the neural network inside the DNN can be divided into three categories: input layer, hidden layer, and output layer. Generally speaking, the first layer is the input layer, the last layer is the output layer, and the layers in between are all hidden layers. The layers are fully connected, that is, any neuron in the i-th layer must be connected to any neuron in the i+1-th layer.

虽然DNN看起来很复杂，但是就每一层的工作来说，其实并不复杂，简单来说就是如下线性关系表达式：$\vec{y}=\alpha(W\vec{x}+\vec{b})$，其中，$\vec{x}$是输入向量，$\vec{y}$是输出向量，$\vec{b}$是偏移向量，$W$是权重矩阵（也称系数），$\alpha(\cdot)$是激活函数。每一层仅仅是对输入向量$\vec{x}$经过如此简单的操作得到输出向量$\vec{y}$。由于DNN层数多，系数$W$和偏移向量$\vec{b}$的数量也比较多。这些参数在DNN中的定义如下所述：以系数$W$为例，假设在一个三层的DNN中，第二层的第4个神经元到第三层的第2个神经元的线性系数定义为$W_{24}^{3}$，上标3代表系数$W$所在的层数，而下标对应的是输出的第三层索引2和输入的第二层索引4。Although a DNN looks complicated, the work of each layer is not; in simple terms it is the linear relationship $\vec{y}=\alpha(W\vec{x}+\vec{b})$, where $\vec{x}$ is the input vector, $\vec{y}$ is the output vector, $\vec{b}$ is the offset vector, $W$ is the weight matrix (also called the coefficients), and $\alpha(\cdot)$ is the activation function. Each layer simply applies this operation to the input vector $\vec{x}$ to obtain the output vector $\vec{y}$. Since a DNN has many layers, there are correspondingly many coefficients $W$ and offset vectors $\vec{b}$. These parameters are defined in the DNN as follows. Taking the coefficient $W$ as an example, in a three-layer DNN the linear coefficient from the 4th neuron of the second layer to the 2nd neuron of the third layer is defined as $W_{24}^{3}$: the superscript 3 is the layer of the coefficient $W$, and the subscripts correspond to the output index 2 in the third layer and the input index 4 in the second layer.

综上，第L-1层的第k个神经元到第L层的第j个神经元的系数定义为$W_{jk}^{L}$。In summary, the coefficient from the kth neuron of layer L-1 to the jth neuron of layer L is defined as $W_{jk}^{L}$.

需要注意的是,输入层是没有W参数的。在深度神经网络中,更多的隐含层让网络更能够刻画现实世界中的复杂情形。理论上而言,参数越多的模型复杂度越高,“容量”也就越大,也就意味着它能完成更复杂的学习任务。训练深度神经网络的也就是学习权重矩阵的过程,其最终目的是得到训练好的深度神经网络的所有层的权重矩阵(由很多层的向量W形成的权重矩阵)。It should be noted that the input layer does not have a W parameter. In a deep neural network, more hidden layers allow the network to better describe complex situations in the real world. Theoretically, the more parameters a model has, the higher its complexity and the greater its "capacity", which means it can complete more complex learning tasks. Training a deep neural network is the process of learning the weight matrix, and its ultimate goal is to obtain the weight matrix of all layers of the trained deep neural network (a weight matrix formed by many layers of vectors W).
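The per-layer operation $\vec{y}=\alpha(W\vec{x}+\vec{b})$ described above can be sketched as a tiny forward pass. The two-layer weights below are made-up toy values, not learned parameters.

```python
import math

def layer_forward(W, b, x):
    # One fully connected layer: y = alpha(W x + b), with alpha = sigmoid here.
    z = [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
    return [1.0 / (1.0 + math.exp(-v)) for v in z]

def dnn_forward(layers, x):
    # layers: list of (W, b); the output of each layer feeds the next one.
    for W, b in layers:
        x = layer_forward(W, b, x)
    return x

# Tiny 2-layer example; W[j][k] plays the role of the coefficient from
# neuron k of the previous layer to neuron j of the current layer.
layers = [([[0.1, 0.2], [0.3, 0.4]], [0.0, 0.0]),
          ([[1.0, -1.0]], [0.0])]
out = dnn_forward(layers, [1.0, 1.0])
print(out)  # a single value in (0, 1)
```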

(3)卷积神经网络(3) Convolutional Neural Network

卷积神经网络(convolutional neuron network,CNN)是一种带有卷积结构的深度神经网络。卷积神经网络包含了一个由卷积层和子采样层构成的特征抽取器,该特征抽取器可以看作是滤波器。卷积层是指卷积神经网络中对输入信号进行卷积处理的神经元层。在卷积神经网络的卷积层中,一个神经元可以只与部分邻层神经元连接。一个卷积层中,通常包含若干个特征平面,每个特征平面可以由一些矩形排列的神经单元组成。同一特征平面的神经单元共享权重,这里共享的权重就是卷积核。共享权重可以理解为提取图像信息的方式与位置无关。卷积核可以以随机大小的矩阵的形式初始化,在卷积神经网络的训练过程中卷积核可以通过学习得到合理的权重。另外,共享权重带来的直接好处是减少卷积神经网络各层之间的连接,同时又降低了过拟合的风险。Convolutional neural network (CNN) is a deep neural network with a convolutional structure. Convolutional neural network contains a feature extractor consisting of a convolution layer and a subsampling layer, which can be regarded as a filter. Convolutional layer refers to the neuron layer in the convolutional neural network that performs convolution processing on the input signal. In the convolutional layer of the convolutional neural network, a neuron can only be connected to some neurons in the adjacent layers. A convolutional layer usually contains several feature planes, each of which can be composed of some rectangularly arranged neural units. The neural units in the same feature plane share weights, and the shared weights here are convolution kernels. Shared weights can be understood as the way to extract image information is independent of position. The convolution kernel can be initialized in the form of a matrix of random size, and the convolution kernel can obtain reasonable weights through learning during the training process of the convolutional neural network. In addition, the direct benefit of shared weights is to reduce the connection between the layers of the convolutional neural network, while reducing the risk of overfitting.
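Weight sharing in a convolution layer, where one kernel (weight matrix) is slid over the whole input so the same weights are reused at every position, can be sketched as follows. The 3×3 image and 2×2 kernel are toy values for illustration.

```python
def conv2d(image, kernel, stride=1):
    # Slide one shared weight matrix (the kernel) over the image; the same
    # weights are reused at every position, which is the "weight sharing".
    kh, kw = len(kernel), len(kernel[0])
    h = (len(image) - kh) // stride + 1
    w = (len(image[0]) - kw) // stride + 1
    out = []
    for i in range(0, h * stride, stride):
        row = []
        for j in range(0, w * stride, stride):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, -1]]  # a simple diagonal-difference filter
print(conv2d(image, kernel))  # [[-4, -4], [-4, -4]]
```

Stacking the outputs of several such kernels along the depth dimension gives the multi-channel feature map described later in this section.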

(4)残差网络(4) Residual Network

残差网络是在2015年提出的一种深度卷积网络，相比于传统的卷积神经网络，残差网络更容易优化，并且能够通过增加相当的深度来提高准确率。残差网络的核心是解决了增加深度带来的副作用(退化问题)，这样能够通过单纯地增加网络深度，来提高网络性能。残差网络一般会包含很多结构相同的子模块，通常会在残差网络(residual network，ResNet)的名称后接一个数字来表示子模块重复的次数，比如ResNet50表示残差网络中有50个子模块。The residual network is a deep convolutional network proposed in 2015. Compared with traditional convolutional neural networks, the residual network is easier to optimize and can improve accuracy by considerably increasing the depth. The core of the residual network is solving the side effect (degradation problem) brought by increasing the depth, so that network performance can be improved by simply increasing the network depth. A residual network generally contains many sub-modules with the same structure; the name of a residual network (ResNet) is usually followed by a number indicating how many times the sub-module is repeated. For example, ResNet50 means there are 50 sub-modules in the residual network.
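The shortcut connection that defines a residual sub-module computes y = F(x) + x, adding the input back onto the sub-module's output; this is what makes very deep stacks easier to optimize. A minimal sketch, where the `transform` stand-in for F is illustrative:

```python
def residual_block(x, transform):
    # y = F(x) + x : the shortcut adds the input back to the
    # sub-module output, so the block only has to learn a residual.
    fx = transform(x)
    return [a + b for a, b in zip(fx, x)]

# If the sub-module contributes nothing (F(x) = 0), the block
# degenerates gracefully to the identity:
out = residual_block([1.0, 2.0], lambda v: [0.0 for _ in v])
print(out)  # [1.0, 2.0] — the shortcut preserves the input
```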

(6)分类器(6) Classifier

很多神经网络结构最后都有一个分类器,用于对图像中的物体进行分类。分类器一般由全连接层(fully connected layer)和softmax函数(可以称为归一化指数函数)组成,能够根据输入而输出不同类别的概率。Many neural network structures have a classifier at the end to classify objects in the image. The classifier is generally composed of a fully connected layer and a softmax function (which can be called a normalized exponential function), which can output the probability of different categories based on the input.
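A sketch of the softmax (normalized exponential) step of such a classifier, which turns the fully connected layer's raw scores into class probabilities:

```python
import math

def softmax(logits):
    # Normalized exponential: subtract the max for numerical stability,
    # exponentiate, then normalize so the outputs sum to 1.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # per-class probabilities, summing to 1; largest logit wins
```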

(7)损失函数(7) Loss function

在训练深度神经网络的过程中,因为希望深度神经网络的输出尽可能的接近真正想要预测的值,所以可以通过比较当前网络的预测值和真正想要的目标值,再根据两者之间的差异情况来更新每一层神经网络的权重向量(当然,在第一次更新之前通常会有初始化的过程,即为深度神经网络中的各层预先配置参数),比如,如果网络的预测值高了,就调整权重向量让它预测低一些,不断地调整,直到深度神经网络能够预测出真正想要的目标值或与真正想要的目标值非常接近的值。因此,就需要预先定义“如何比较预测值和目标值之间的差异”,这便是损失函数(loss function)或目标函数(objective function),它们是用于衡量预测值和目标值的差异的重要方程。其中,以损失函数举例,损失函数的输出值(loss)越高表示差异越大,那么深度神经网络的训练就变成了尽可能缩小这个loss的过程。In the process of training a deep neural network, because we hope that the output of the deep neural network is as close as possible to the value we really want to predict, we can compare the predicted value of the current network with the target value we really want, and then update the weight vector of each layer of the neural network according to the difference between the two (of course, there is usually an initialization process before the first update, that is, pre-configuring parameters for each layer in the deep neural network). For example, if the predicted value of the network is high, adjust the weight vector to make it predict a lower value, and keep adjusting until the deep neural network can predict the target value we really want or a value very close to the target value we really want. Therefore, it is necessary to pre-define "how to compare the difference between the predicted value and the target value", which is the loss function or objective function, which are important equations used to measure the difference between the predicted value and the target value. Among them, taking the loss function as an example, the higher the output value (loss) of the loss function, the greater the difference, so the training of the deep neural network becomes a process of minimizing this loss as much as possible.
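A minimal example of one such loss function, measuring the gap between the predicted probabilities and the target class, here cross-entropy (the probability values are illustrative):

```python
import math

def cross_entropy(pred_probs, true_index):
    # The further the predicted probability of the true class is from 1,
    # the larger the loss value.
    return -math.log(pred_probs[true_index])

good = cross_entropy([0.9, 0.05, 0.05], 0)  # confident, correct prediction
bad = cross_entropy([0.1, 0.8, 0.1], 0)     # confident, wrong prediction
print(good, bad)  # the worse prediction yields the higher loss
```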

(8)反向传播算法(8) Back propagation algorithm

神经网络可以采用误差反向传播(back propagation,BP)算法在训练过程中修正初始的神经网络模型中参数的数值,使得神经网络模型的重建误差损失越来越小。具体地,前向传递输入信号直至输出会产生误差损失,通过反向传播误差损失信息来更新初始的神经网络模型中参数,从而使误差损失收敛。反向传播算法是以误差损失为主导的反向传播运动,旨在得到最优的神经网络模型的参数,例如权重矩阵。Neural networks can use the error back propagation (BP) algorithm to correct the values of the parameters in the initial neural network model during the training process, so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, the forward transmission of the input signal to the output will generate error loss, and the parameters in the initial neural network model are updated by back propagating the error loss information, so that the error loss converges. The back propagation algorithm is a back propagation movement dominated by error loss, which aims to obtain the optimal parameters of the neural network model, such as the weight matrix.
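The parameter-update idea behind back propagation can be illustrated on a single weight: compute the gradient of the loss with respect to the weight, then step against it. This one-parameter sketch is for intuition only and is not the patent's actual network.

```python
def train_one_weight(x, target, w, lr=0.1, steps=100):
    # Gradient descent on loss = (w*x - target)^2 for one linear "neuron":
    # the loss gradient is propagated back to update w each step.
    for _ in range(steps):
        pred = w * x
        grad = 2 * (pred - target) * x  # d(loss)/dw
        w -= lr * grad
    return w

w = train_one_weight(x=2.0, target=6.0, w=0.0)
print(w)  # converges toward 3.0, since 3.0 * 2.0 == 6.0
```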

下面结合图1对本申请实施例的系统架构进行详细的介绍。The system architecture of the embodiment of the present application is described in detail below in conjunction with FIG. 1 .

图1是本申请实施例的系统架构的示意图。如图1所示,系统架构100包括执行设备110、训练设备120、数据库130、客户设备140、数据存储系统150、以及数据采集系统160。FIG1 is a schematic diagram of a system architecture of an embodiment of the present application. As shown in FIG1 , the system architecture 100 includes an execution device 110 , a training device 120 , a database 130 , a client device 140 , a data storage system 150 , and a data acquisition system 160 .

另外,执行设备110包括计算模块111、I/O接口112、预处理模块113和预处理模块114。其中,计算模块111中可以包括目标模型/规则101,预处理模块113和预处理模块114是可选的。In addition, the execution device 110 includes a calculation module 111, an I/O interface 112, a preprocessing module 113 and a preprocessing module 114. The calculation module 111 may include the target model/rule 101, and the preprocessing module 113 and the preprocessing module 114 are optional.

数据采集设备160用于采集训练数据。针对本申请实施例的行人再识别网络的训练方法来说,训练数据可以包括M个训练图像以及该M个训练图像的标注数据。在采集到训练数据之后,数据采集设备160将这些训练数据存入数据库130,训练设备120基于数据库130中维护的训练数据训练得到目标模型/规则101。The data acquisition device 160 is used to collect training data. For the training method of the pedestrian re-identification network of the embodiment of the present application, the training data may include M training images and the annotation data of the M training images. After collecting the training data, the data acquisition device 160 stores the training data in the database 130, and the training device 120 trains the target model/rule 101 based on the training data maintained in the database 130.

下面对训练设备120基于训练数据得到目标模型/规则101进行描述,训练设备120对输入的训练图像进行特征提取,得到训练图像的特征向量,重复对输入的训练图像进行特征提取,直到损失函数的函数值满足预设要求(小于或者等于预设阈值),从而完成目标模型/规则101的训练。The following describes how the training device 120 obtains the target model/rule 101 based on the training data. The training device 120 performs feature extraction on the input training image to obtain a feature vector of the training image, and repeatedly performs feature extraction on the input training image until the function value of the loss function meets the preset requirements (less than or equal to the preset threshold), thereby completing the training of the target model/rule 101.
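The stop-when-the-loss-meets-the-preset-requirement loop described above can be sketched generically as follows; the scalar objective and update rule are stand-ins for the real network's loss and optimizer, used only to show the control flow.

```python
def train_until_threshold(loss_fn, update_fn, params, threshold=1e-3, max_iters=1000):
    # Repeat the forward pass + parameter update until the loss meets the
    # preset requirement (less than or equal to the threshold), with a
    # safety cap on the number of iterations.
    for i in range(max_iters):
        loss = loss_fn(params)
        if loss <= threshold:
            return params, loss, i
        params = update_fn(params)
    return params, loss_fn(params), max_iters

# Toy example: drive a scalar parameter toward 5.0.
params, loss, iters = train_until_threshold(
    loss_fn=lambda p: (p - 5.0) ** 2,
    update_fn=lambda p: p - 0.1 * 2 * (p - 5.0),  # gradient step
    params=0.0)
print(params, loss, iters)
```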

应理解,上述目标模型/规则101的训练可以是一个无监督的训练。It should be understood that the training of the above target model/rule 101 can be an unsupervised training.

上述目标模型/规则101能够用于实现本申请实施例的行人再识别方法，即，将行人图像(行人图像可以是需要进行行人识别的图像)输入该目标模型/规则101，即可得到从行人图像中提取的特征向量，并基于提取到的特征向量进行行人识别，确定行人的识别结果。本申请实施例中的目标模型/规则101具体可以为神经网络。需要说明的是，在实际应用中，数据库130中维护的训练数据不一定都来自于数据采集设备160的采集，也有可能是从其他设备接收得到的。另外需要说明的是，训练设备120也不一定完全基于数据库130维护的训练数据进行目标模型/规则101的训练，也有可能从云端或其他地方获取训练数据进行模型训练，上述描述不应该作为对本申请实施例的限定。The above-mentioned target model/rule 101 can be used to implement the pedestrian re-identification method of the embodiment of the present application, that is, by inputting a pedestrian image (the pedestrian image may be an image that needs to be recognized as a pedestrian) into the target model/rule 101, a feature vector extracted from the pedestrian image can be obtained, and pedestrian recognition is performed based on the extracted feature vector to determine the recognition result of the pedestrian. The target model/rule 101 in the embodiment of the present application can specifically be a neural network. It should be noted that in actual applications, the training data maintained in the database 130 does not necessarily all come from the data acquisition device 160, but may also be received from other devices. It should also be noted that the training device 120 does not necessarily train the target model/rule 101 entirely based on the training data maintained by the database 130, and it is also possible to obtain training data from the cloud or other places for model training. The above description should not be used as a limitation on the embodiments of the present application.

根据训练设备120训练得到的目标模型/规则101可以应用于不同的系统或设备中,如应用于图1所示的执行设备110,所述执行设备110可以是终端,如手机终端,平板电脑,笔记本电脑,增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR),车载终端等,还可以是服务器或者云端等。在图1中,执行设备110配置输入/输出(input/output,I/O)接口112,用于与外部设备进行数据交互,用户可以通过客户设备140向I/O接口112输入数据,所述输入数据在本申请实施例中可以包括:客户设备输入的行人图像。这里的客户设备140具体可以是监控设备。The target model/rule 101 obtained by training the training device 120 can be applied to different systems or devices, such as the execution device 110 shown in FIG1 . The execution device 110 can be a terminal, such as a mobile phone terminal, a tablet computer, a laptop computer, an augmented reality (AR)/virtual reality (VR), a vehicle terminal, etc., and can also be a server or a cloud. In FIG1 , the execution device 110 is configured with an input/output (I/O) interface 112 for data interaction with an external device. The user can input data to the I/O interface 112 through the client device 140. The input data in the embodiment of the present application may include: a pedestrian image input by the client device. The client device 140 here can specifically be a monitoring device.

预处理模块113和预处理模块114用于根据I/O接口112接收到的输入数据(如行人图像)进行预处理，在本申请实施例中，可以没有预处理模块113和预处理模块114，或者只有一个预处理模块。当不存在预处理模块113和预处理模块114时，可以直接采用计算模块111对输入数据进行处理。The preprocessing module 113 and the preprocessing module 114 are used to preprocess the input data (such as pedestrian images) received by the I/O interface 112. In the embodiment of the present application, there may be neither preprocessing module 113 nor preprocessing module 114, or only one preprocessing module. When preprocessing module 113 and preprocessing module 114 are absent, the computing module 111 may directly process the input data.

在执行设备110对输入数据进行预处理,或者在执行设备110的计算模块111执行计算等相关的处理过程中,执行设备110可以调用数据存储系统150中的数据、代码等以用于相应的处理,也可以将相应处理得到的数据、指令等存入数据存储系统150中。When the execution device 110 preprocesses the input data, or when the computing module 111 of the execution device 110 performs calculations and other related processing, the execution device 110 can call the data, code, etc. in the data storage system 150 for corresponding processing, and can also store the data, instructions, etc. obtained from the corresponding processing into the data storage system 150.

最后,I/O接口112将处理结果(具体可以是行人再识别得到的高质量图像),如将目标模型/规则101对行人图像进行行人再识别处理得到的待识别图像的识别结果呈现给客户设备140,从而提供给用户。Finally, the I/O interface 112 presents the processing result (specifically, a high-quality image obtained by pedestrian re-identification), such as the recognition result of the image to be recognized obtained by performing pedestrian re-identification processing on the pedestrian image by the target model/rule 101, to the client device 140, thereby providing it to the user.

具体地,经过计算模块111中的目标模型/规则101进行行人再识别得到的高质量图像可以通过预处理模块113(也可以再加上预处理模块114的处理)的处理(例如,进行图像渲染处理)后将处理结果送入到I/O接口,再由I/O接口将处理结果送入到客户设备140中显示。Specifically, the high-quality image obtained by re-identifying the pedestrian through the target model/rule 101 in the computing module 111 can be processed by the pre-processing module 113 (and can also be processed by the pre-processing module 114) (for example, image rendering processing) and the processing result can be sent to the I/O interface, and then the I/O interface can send the processing result to the client device 140 for display.

应理解,当上述系统架构100中不存在预处理模块113和预处理模块114时,计算模块111还可以将通过行人再识别处理得到的高质量图像传输到I/O接口,然后再由I/O接口将处理结果送入到客户设备140中显示。It should be understood that when the pre-processing module 113 and the pre-processing module 114 do not exist in the above-mentioned system architecture 100, the computing module 111 can also transmit the high-quality image obtained through the pedestrian re-identification processing to the I/O interface, and then the I/O interface sends the processing result to the client device 140 for display.

值得说明的是,训练设备120可以针对不同的目标或称不同的任务(例如,训练设备可以针对不同场景下真实高质量图像和近似低质量图像进行训练),基于不同的训练数据生成相应的目标模型/规则101,该相应的目标模型/规则101即可以用于实现上述目标或完成上述任务,从而为用户提供所需的结果。It is worth noting that the training device 120 can target different goals or different tasks (for example, the training device can be trained on real high-quality images and approximate low-quality images in different scenarios), and generate corresponding target models/rules 101 based on different training data. The corresponding target models/rules 101 can be used to achieve the above goals or complete the above tasks, thereby providing users with the desired results.

值得注意的是,图1仅是本申请实施例提供的一种系统架构的示意图,图中所示设备、器件、模块等之间的位置关系不构成任何限制,例如,在图1中,数据存储系统150相对执行设备110是外部存储器,在其它情况下,也可以将数据存储系统150置于执行设备110中。It is worth noting that Figure 1 is only a schematic diagram of a system architecture provided by an embodiment of the present application. The positional relationship between the devices, components, modules, etc. shown in the figure does not constitute any limitation. For example, in Figure 1, the data storage system 150 is an external memory relative to the execution device 110. In other cases, the data storage system 150 can also be placed in the execution device 110.

如图1所示，根据训练设备120训练得到目标模型/规则101，可以是神经网络(模型)。具体的，该神经网络(模型)可以是CNN以及深度卷积神经网络(deep convolutional neural networks, DCNN)等等。As shown in Fig. 1, the target model/rule 101 obtained by training with the training device 120 may be a neural network (model). Specifically, the neural network (model) may be a CNN or a deep convolutional neural network (DCNN).

由于CNN是一种非常常见的神经网络,下面结合图2重点对CNN的结构进行详细的介绍。如上文的基础概念介绍所述,卷积神经网络是一种带有卷积结构的深度神经网络,是一种深度学习(deep learning)架构,深度学习架构是指通过机器学习的算法,在不同的抽象层级上进行多个层次的学习。作为一种深度学习架构,CNN是一种前馈(feed-forward)人工神经网络,该前馈人工神经网络中的各个神经元可以对输入其中的图像作出响应。Since CNN is a very common neural network, the following focuses on the detailed introduction of the structure of CNN in conjunction with Figure 2. As mentioned in the basic concept introduction above, convolutional neural network is a deep neural network with a convolution structure and a deep learning architecture. A deep learning architecture refers to multiple levels of learning at different abstract levels through machine learning algorithms. As a deep learning architecture, CNN is a feed-forward artificial neural network in which each neuron can respond to the image input into it.

如图2所示,卷积神经网络(CNN)200可以包括输入层210,卷积层/池化层220(其中池化层为可选的),以及全连接层(fully connected layer)230。下面对这些层的相关内容做详细介绍。As shown in Fig. 2, a convolutional neural network (CNN) 200 may include an input layer 210, a convolutional layer/pooling layer 220 (wherein the pooling layer is optional), and a fully connected layer 230. The relevant contents of these layers are described in detail below.

卷积层/池化层220:Convolutional layer/pooling layer 220:

卷积层:Convolutional Layer:

如图2所示卷积层/池化层220可以包括如示例221-226层,举例来说:在一种实现中,221层为卷积层,222层为池化层,223层为卷积层,224层为池化层,225为卷积层,226为池化层;在另一种实现方式中,221、222为卷积层,223为池化层,224、225为卷积层,226为池化层。即卷积层的输出可以作为随后的池化层的输入,也可以作为另一个卷积层的输入以继续进行卷积操作。As shown in FIG2 , the convolution layer/pooling layer 220 may include layers 221-226, for example: in one implementation, layer 221 is a convolution layer, layer 222 is a pooling layer, layer 223 is a convolution layer, layer 224 is a pooling layer, layer 225 is a convolution layer, and layer 226 is a pooling layer; in another implementation, layers 221 and 222 are convolution layers, layer 223 is a pooling layer, layers 224 and 225 are convolution layers, and layer 226 is a pooling layer. That is, the output of a convolution layer can be used as the input of a subsequent pooling layer, or as the input of another convolution layer to continue the convolution operation.

下面将以卷积层221为例,介绍一层卷积层的内部工作原理。The following will take the convolution layer 221 as an example to introduce the internal working principle of a convolution layer.

卷积层221可以包括很多个卷积算子,卷积算子也称为核,其在图像处理中的作用相当于一个从输入图像矩阵中提取特定信息的过滤器,卷积算子本质上可以是一个权重矩阵,这个权重矩阵通常被预先定义,在对图像进行卷积操作的过程中,权重矩阵通常在输入图像上沿着水平方向一个像素接着一个像素(或两个像素接着两个像素……这取决于步长stride的取值)的进行处理,从而完成从图像中提取特定特征的工作。该权重矩阵的大小应该与图像的大小相关,需要注意的是,权重矩阵的纵深维度(depth dimension)和输入图像的纵深维度是相同的,在进行卷积运算的过程中,权重矩阵会延伸到输入图像的整个深度。因此,和一个单一的权重矩阵进行卷积会产生一个单一纵深维度的卷积化输出,但是大多数情况下不使用单一权重矩阵,而是应用多个尺寸(行×列)相同的权重矩阵,即多个同型矩阵。每个权重矩阵的输出被堆叠起来形成卷积图像的纵深维度,这里的维度可以理解为由上面所述的“多个”来决定。不同的权重矩阵可以用来提取图像中不同的特征,例如一个权重矩阵用来提取图像边缘信息,另一个权重矩阵用来提取图像的特定颜色,又一个权重矩阵用来对图像中不需要的噪点进行模糊化等。该多个权重矩阵尺寸(行×列)相同,经过该多个尺寸相同的权重矩阵提取后的卷积特征图的尺寸也相同,再将提取到的多个尺寸相同的卷积特征图合并形成卷积运算的输出。The convolution layer 221 may include a plurality of convolution operators, which are also called kernels. The convolution operator is equivalent to a filter that extracts specific information from the input image matrix in image processing. The convolution operator can be essentially a weight matrix, which is usually predefined. In the process of performing convolution operations on the image, the weight matrix is usually processed one pixel after another (or two pixels after two pixels... depending on the value of the stride) in the horizontal direction on the input image, thereby completing the work of extracting specific features from the image. The size of the weight matrix should be related to the size of the image. It should be noted that the depth dimension of the weight matrix is the same as the depth dimension of the input image. In the process of performing convolution operations, the weight matrix will extend to the entire depth of the input image. Therefore, convolution with a single weight matrix will produce a convolution output with a single depth dimension, but in most cases, a single weight matrix is not used, but multiple weight matrices of the same size (row × column), that is, multiple isotype matrices, are applied. 
The output of each weight matrix is stacked to form the depth dimension of the convolution image, and the dimension here can be understood as being determined by the "multiple" mentioned above. Different weight matrices can be used to extract different features in the image, for example, one weight matrix is used to extract image edge information, another weight matrix is used to extract specific colors of the image, and another weight matrix is used to blur unnecessary noise points in the image, etc. The multiple weight matrices have the same size (rows × columns), and the convolution feature maps extracted by the multiple weight matrices of the same size are also the same size. The extracted multiple convolution feature maps of the same size are then merged to form the output of the convolution operation.

这些权重矩阵中的权重值在实际应用中需要经过大量的训练得到,通过训练得到的权重值形成的各个权重矩阵可以用来从输入图像中提取信息,从而使得卷积神经网络200进行正确的预测。The weight values in these weight matrices need to be obtained through a lot of training in practical applications. The weight matrices formed by the weight values obtained through training can be used to extract information from the input image, so that the convolutional neural network 200 can make correct predictions.

当卷积神经网络200有多个卷积层的时候,初始的卷积层(例如221)往往提取较多的一般特征,该一般特征也可以称之为低级别的特征;随着卷积神经网络200深度的加深,越往后的卷积层(例如226)提取到的特征越来越复杂,比如高级别的语义之类的特征,语义越高的特征越适用于待解决的问题。When the convolutional neural network 200 has multiple convolutional layers, the initial convolutional layer (for example, 221) often extracts more general features, which can also be called low-level features. As the depth of the convolutional neural network 200 increases, the features extracted by the later convolutional layers (for example, 226) become more and more complex, such as high-level semantic features. Features with higher semantics are more suitable for the problem to be solved.

池化层:Pooling layer:

由于常常需要减少训练参数的数量,因此卷积层之后常常需要周期性的引入池化层,在如图2中220所示例的221-226各层,可以是一层卷积层后面跟一层池化层,也可以是多层卷积层后面接一层或多层池化层。在图像处理过程中,池化层的唯一目的就是减少图像的空间大小。池化层可以包括平均池化算子和/或最大池化算子,以用于对输入图像进行采样得到较小尺寸的图像。平均池化算子可以在特定范围内对图像中的像素值进行计算产生平均值作为平均池化的结果。最大池化算子可以在特定范围内取该范围内值最大的像素作为最大池化的结果。另外,就像卷积层中用权重矩阵的大小应该与图像尺寸相关一样,池化层中的运算符也应该与图像的大小相关。通过池化层处理后输出的图像尺寸可以小于输入池化层的图像的尺寸,池化层输出的图像中每个像素点表示输入池化层的图像的对应子区域的平均值或最大值。Since it is often necessary to reduce the number of training parameters, it is often necessary to periodically introduce a pooling layer after the convolution layer. In each layer 221-226 as shown in 220 in FIG. 2, a convolution layer may be followed by a pooling layer, or multiple convolution layers may be followed by one or more pooling layers. In the image processing process, the only purpose of the pooling layer is to reduce the spatial size of the image. The pooling layer may include an average pooling operator and/or a maximum pooling operator to sample the input image to obtain an image of smaller size. The average pooling operator may calculate the pixel values in the image within a specific range to generate an average value as the result of average pooling. The maximum pooling operator may take the pixel with the largest value in the range within a specific range as the result of maximum pooling. In addition, just as the size of the weight matrix used in the convolution layer should be related to the image size, the operator in the pooling layer should also be related to the image size. The size of the image output after processing by the pooling layer may be smaller than the size of the image input to the pooling layer, and each pixel in the image output by the pooling layer represents the average value or maximum value of the corresponding sub-region of the image input to the pooling layer.
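Max and average pooling over non-overlapping sub-regions, where each output pixel summarizes one sub-region of the input with its maximum or its mean, can be sketched as follows (the 4×4 input and 2×2 window are toy values):

```python
def pool2d(image, size=2, mode="max"):
    # Non-overlapping pooling: each output pixel summarizes one size x size
    # sub-region of the input (its maximum or its average).
    h, w = len(image) // size, len(image[0]) // size
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            patch = [image[i * size + di][j * size + dj]
                     for di in range(size) for dj in range(size)]
            row.append(max(patch) if mode == "max" else sum(patch) / len(patch))
        out.append(row)
    return out

image = [[1, 3, 2, 4],
         [5, 7, 6, 8],
         [9, 2, 1, 0],
         [3, 4, 5, 6]]
print(pool2d(image, mode="max"))      # [[7, 8], [9, 6]]
print(pool2d(image, mode="average"))  # [[4.0, 5.0], [4.5, 3.0]]
```

Note that the output is half the input's spatial size in each dimension, which is exactly the "reduce the spatial size of the image" purpose described above.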

全连接层230:Fully connected layer 230:

在经过卷积层/池化层220的处理后,卷积神经网络200还不足以输出所需要的输出信息。因为如前所述,卷积层/池化层220只会提取特征,并减少输入图像带来的参数。然而为了生成最终的输出信息(所需要的类信息或其他相关信息),卷积神经网络200需要利用全连接层230来生成一个或者一组所需要的类的数量的输出。因此,在全连接层230中可以包括多层隐含层(如图2所示的231、232至23n)以及输出层240,该多层隐含层中所包含的参数可以根据具体的任务类型的相关训练数据进行预先训练得到,例如该任务类型可以包括图像识别,图像分类,图像超分辨率重建等等。After being processed by the convolution layer/pooling layer 220, the convolution neural network 200 is not sufficient to output the required output information. Because as mentioned above, the convolution layer/pooling layer 220 will only extract features and reduce the parameters brought by the input image. However, in order to generate the final output information (the required class information or other related information), the convolution neural network 200 needs to use the fully connected layer 230 to generate one or a group of outputs of the required number of classes. Therefore, the fully connected layer 230 may include multiple hidden layers (231, 232 to 23n as shown in Figure 2) and an output layer 240. The parameters contained in the multiple hidden layers can be pre-trained according to the relevant training data of the specific task type. For example, the task type may include image recognition, image classification, image super-resolution reconstruction, etc.

在全连接层230中的多层隐含层之后,也就是整个卷积神经网络200的最后层为输出层240,该输出层240具有类似分类交叉熵的损失函数,具体用于计算预测误差,一旦整个卷积神经网络200的前向传播(如图2由210至240方向的传播为前向传播)完成,反向传播(如图2由240至210方向的传播为反向传播)就会开始更新前面提到的各层的权重值以及偏差,以减少卷积神经网络200的损失,及卷积神经网络200通过输出层输出的结果和理想结果之间的误差。After the multiple hidden layers in the fully connected layer 230, that is, the last layer of the entire convolutional neural network 200 is the output layer 240, which has a loss function similar to the classification cross entropy, which is specifically used to calculate the prediction error. Once the forward propagation of the entire convolutional neural network 200 (as shown in FIG. 2, the propagation from 210 to 240 is the forward propagation) is completed, the back propagation (as shown in FIG. 2, the propagation from 240 to 210 is the back propagation) will begin to update the weight values and biases of the aforementioned layers to reduce the loss of the convolutional neural network 200 and the error between the result output by the convolutional neural network 200 through the output layer and the ideal result.

需要说明的是,如图2所示的卷积神经网络200仅作为一种卷积神经网络的示例,在具体的应用中,卷积神经网络还可以以其他网络模型的形式存在。It should be noted that the convolutional neural network 200 shown in FIG. 2 is only an example of a convolutional neural network. In specific applications, the convolutional neural network may also exist in the form of other network models.

应理解,可以采用图2所示的卷积神经网络(CNN)200执行本申请实施例的行人再识别方法,如图2所示,行人图像经过输入层210、卷积层/池化层220和全连接层230的处理之后可以得到待识别图像的图像特征,后续可以根据待识别图像的图像特征再获取到待识别图像的识别结果。It should be understood that the convolutional neural network (CNN) 200 shown in Figure 2 can be used to perform the pedestrian re-identification method of the embodiment of the present application. As shown in Figure 2, after the pedestrian image is processed by the input layer 210, the convolution layer/pooling layer 220 and the fully connected layer 230, the image features of the image to be identified can be obtained, and subsequently the recognition result of the image to be identified can be obtained based on the image features of the image to be identified.

图3为本申请实施例提供的一种芯片硬件结构,该芯片包括神经网络处理器50。该芯片可以被设置在如图1所示的执行设备110中,用以完成计算模块111的计算工作。该芯片也可以被设置在如图1所示的训练设备120中,用以完成训练设备120的训练工作并输出目标模型/规则101。如图2所示的卷积神经网络中各层的算法均可在如图3所示的芯片中得以实现。FIG3 is a chip hardware structure provided in an embodiment of the present application, and the chip includes a neural network processor 50. The chip can be set in the execution device 110 as shown in FIG1 to complete the calculation work of the calculation module 111. The chip can also be set in the training device 120 as shown in FIG1 to complete the training work of the training device 120 and output the target model/rule 101. The algorithms of each layer in the convolutional neural network shown in FIG2 can be implemented in the chip shown in FIG3.

神经网络处理器(neural-network processing unit,NPU)50作为协处理器挂载到主中央处理器(central processing unit,CPU)(host CPU)上,由主CPU分配任务。NPU的核心部分为运算电路503,控制器504控制运算电路503提取存储器(权重存储器或输入存储器)中的数据并进行运算。The neural-network processing unit (NPU) 50 is mounted on the host central processing unit (CPU) as a coprocessor, and the host CPU assigns tasks. The core part of the NPU is the operation circuit 503, and the controller 504 controls the operation circuit 503 to extract data from the memory (weight memory or input memory) and perform operations.

在一些实现中,运算电路503内部包括多个处理单元(process engine,PE)。在一些实现中,运算电路503是二维脉动阵列。运算电路503还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现中,运算电路503是通用的矩阵处理器。In some implementations, the operation circuit 503 includes multiple processing units (process engines, PEs) inside. In some implementations, the operation circuit 503 is a two-dimensional systolic array. The operation circuit 503 can also be a one-dimensional systolic array or other electronic circuits capable of performing mathematical operations such as multiplication and addition. In some implementations, the operation circuit 503 is a general-purpose matrix processor.

举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路503从权重存储器502中取矩阵B相应的数据,并缓存在运算电路503中每一个PE上。运算电路503从输入存储器501中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)508中。For example, assume there is an input matrix A, a weight matrix B, and an output matrix C. The operation circuit 503 takes the corresponding data of the matrix B from the weight memory 502 and caches it on each PE in the operation circuit 503. The operation circuit 503 takes the matrix A data from the input memory 501 and performs a matrix operation with the matrix B, and the partial result or the final result of the matrix is stored in the accumulator 508.
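下面给出一个假设性的纯Python示意(并非NPU的真实实现),模拟运算电路503以累加器保存部分和、直至得到最终结果的矩阵运算流程;函数名matmul_with_accumulator为说明而取。The following is a hypothetical pure-Python sketch (not the actual NPU implementation) that mimics how the operation circuit 503 could accumulate partial sums when computing C=A×B; the function name matmul_with_accumulator is assumed for illustration.

```python
def matmul_with_accumulator(A, B):
    """模拟运算电路:权重矩阵 B 视为已缓存在各 PE 上,
    输入矩阵 A 的数据逐项参与乘加,部分和保存在累加器 acc 中。"""
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0  # 对应累加器 508:保存部分结果直至得到最终结果
            for k in range(inner):
                acc += A[i][k] * B[k][j]
            C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_with_accumulator(A, B))  # [[19, 22], [43, 50]]
```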

向量计算单元507可以对运算电路503的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。例如,向量计算单元507可以用于神经网络中非卷积/非FC层的网络计算,如池化(pooling),批归一化(batch normalization),局部响应归一化(local response normalization)等。The vector calculation unit 507 can further process the output of the operation circuit 503, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, etc. For example, the vector calculation unit 507 can be used for network calculations of non-convolutional/non-FC layers in a neural network, such as pooling, batch normalization, local response normalization, etc.

在一些实现中,向量计算单元507能将经处理的输出的向量存储到统一缓存器506。例如,向量计算单元507可以将非线性函数应用到运算电路503的输出,例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元507生成归一化的值、合并值,或二者均有。在一些实现中,处理过的输出的向量能够用作到运算电路503的激活输入,例如用于在神经网络中的后续层中的使用。In some implementations, the vector calculation unit 507 can store the processed output vector to the unified buffer 506. For example, the vector calculation unit 507 can apply a nonlinear function to the output of the operation circuit 503, such as a vector of accumulated values, to generate an activation value. In some implementations, the vector calculation unit 507 generates a normalized value, a merged value, or both. In some implementations, the processed output vector can be used as an activation input to the operation circuit 503, such as for use in a subsequent layer in a neural network.
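作为示意,下面用一个极简的Python片段说明"对累加值的向量应用非线性函数生成激活值"这一步骤,这里以ReLU为例,仅为假设性草图。As an illustration, the minimal Python snippet below sketches applying a nonlinear function (ReLU here) to a vector of accumulated values to produce activations; this is a hypothetical sketch only.

```python
def relu_vector(acc_values):
    """向量计算单元对累加值向量施加非线性函数(此处以 ReLU 为例)生成激活值。"""
    return [max(0.0, v) for v in acc_values]

print(relu_vector([-1.5, 0.0, 2.5]))  # [0.0, 0.0, 2.5]
```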

统一存储器506用于存放输入数据以及输出数据。The unified memory 506 is used to store input data and output data.

存储单元访问控制器505(direct memory access controller,DMAC)用于将外部存储器中的输入数据搬运到输入存储器501和/或统一存储器506、将外部存储器中的权重数据存入权重存储器502,以及将统一存储器506中的数据存入外部存储器。The direct memory access controller 505 (DMAC) is used to transfer the input data in the external memory to the input memory 501 and/or the unified memory 506, store the weight data in the external memory into the weight memory 502, and store the data in the unified memory 506 into the external memory.

总线接口单元(bus interface unit,BIU)510,用于通过总线实现主CPU、DMAC和取指存储器509之间进行交互。The bus interface unit (BIU) 510 is used to implement the interaction between the main CPU, DMAC and instruction fetch memory 509 through the bus.

与控制器504连接的取指存储器(instruction fetch buffer)509,用于存储控制器504使用的指令;An instruction fetch buffer 509 connected to the controller 504 and used to store instructions used by the controller 504;

控制器504,用于调用取指存储器509中缓存的指令,实现控制该运算加速器的工作过程。The controller 504 is used to call the instructions cached in the instruction fetch memory 509 to control the working process of the computing accelerator.

一般地,统一存储器506,输入存储器501,权重存储器502以及取指存储器509均为片上(on-chip)存储器,外部存储器为该NPU外部的存储器,该外部存储器可以为双倍数据率同步动态随机存储器(double data rate synchronous dynamic random access memory,简称DDR SDRAM)、高带宽存储器(high bandwidth memory,HBM)或其他可读可写的存储器。Generally, the unified memory 506, the input memory 501, the weight memory 502 and the instruction fetch memory 509 are all on-chip memories, and the external memory is a memory outside the NPU, which can be a double data rate synchronous dynamic random access memory (DDR SDRAM), a high bandwidth memory (HBM) or other readable and writable memory.

另外,在本申请中,图2所示的卷积神经网络中各层的运算可以由运算电路503或向量计算单元507执行。In addition, in the present application, the operations of each layer in the convolutional neural network shown in Figure 2 can be performed by the operation circuit 503 or the vector calculation unit 507.

如图4所示,本申请实施例提供了一种系统架构300。该系统架构包括本地设备301、本地设备302以及执行设备210和数据存储系统250,其中,本地设备301和本地设备302通过通信网络与执行设备210连接。As shown in Fig. 4, the embodiment of the present application provides a system architecture 300. The system architecture includes a local device 301, a local device 302, an execution device 210 and a data storage system 250, wherein the local device 301 and the local device 302 are connected to the execution device 210 via a communication network.

执行设备210可以由一个或多个服务器实现。可选的,执行设备210可以与其它计算设备配合使用,例如:数据存储器、路由器、负载均衡器等设备。执行设备210可以布置在一个物理站点上,或者分布在多个物理站点上。执行设备210可以使用数据存储系统250中的数据,或者调用数据存储系统250中的程序代码来实现本申请实施例的行人再识别方法。The execution device 210 can be implemented by one or more servers. Optionally, the execution device 210 can be used in conjunction with other computing devices, such as data storage devices, routers, load balancers, and other devices. The execution device 210 can be arranged at one physical site, or distributed at multiple physical sites. The execution device 210 can use the data in the data storage system 250, or call the program code in the data storage system 250 to implement the pedestrian re-identification method of the embodiment of the present application.

用户可以操作各自的用户设备(例如本地设备301和本地设备302)与执行设备210进行交互。每个本地设备可以表示任何计算设备,例如个人计算机、计算机工作站、智能手机、平板电脑、智能摄像头、智能汽车或其他类型蜂窝电话、媒体消费设备、可穿戴设备、机顶盒、游戏机等。Users can operate their respective user devices (e.g., local device 301 and local device 302) to interact with execution device 210. Each local device can represent any computing device, such as a personal computer, a computer workstation, a smart phone, a tablet computer, a smart camera, a smart car or other type of cellular phone, a media consumption device, a wearable device, a set-top box, a game console, etc.

每个用户的本地设备可以通过任何通信机制/通信标准的通信网络与执行设备210进行交互,通信网络可以是广域网、局域网、点对点连接等方式,或它们的任意组合。The local device of each user can interact with the execution device 210 through a communication network of any communication mechanism/communication standard. The communication network can be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.

在一种实现方式中,本地设备301、本地设备302从执行设备210获取到目标神经网络的相关参数,将目标神经网络部署在本地设备301、本地设备302上,利用该目标神经网络进行行人再识别。In one implementation, the local device 301 and the local device 302 obtain relevant parameters of the target neural network from the execution device 210, deploy the target neural network on the local device 301 and the local device 302, and use the target neural network to perform pedestrian re-identification.

在另一种实现中,执行设备210上可以直接部署目标神经网络,执行设备210通过从本地设备301和本地设备302获取行人图像(本地设备301和本地设备302可以将行人图像上传给执行设备210),并根据目标神经网络对行人图像进行行人再识别,并将行人再识别得到的识别结果发送给本地设备301和本地设备302。In another implementation, the target neural network can be directly deployed on the execution device 210. The execution device 210 obtains pedestrian images from the local device 301 and the local device 302 (the local device 301 and the local device 302 can upload the pedestrian images to the execution device 210), performs pedestrian re-identification on the pedestrian images according to the target neural network, and sends the recognition results obtained by pedestrian re-identification to the local device 301 and the local device 302.

上述执行设备210也可以称为云端设备,此时执行设备210一般部署在云端。The execution device 210 may also be referred to as a cloud device. In this case, the execution device 210 is generally deployed in the cloud.

图5是本申请实施例的一种可能的应用场景的示意图。FIG. 5 is a schematic diagram of a possible application scenario of an embodiment of the present application.

如图5所示,在本申请中可以通过单图像拍摄设备标注数据对行人再识别网络进行训练,得到训练好的行人再识别网络,该训练好的行人再识别网络可以对行人图像进行处理,得到行人图像的特征向量,接下来,通过将该行人图像的特征向量与图像库中的特征向量进行特征比对,就可以得到要寻找的人。具体地,通过特征比对可以寻找到与行人图像的特征向量最相似的目标行人图像,并输出目标行人图像的拍摄时间、位置等基本信息。As shown in FIG5 , in the present application, the pedestrian re-identification network can be trained by annotating data of a single image shooting device to obtain a trained pedestrian re-identification network, which can process pedestrian images to obtain feature vectors of pedestrian images. Next, by performing feature comparison between the feature vectors of the pedestrian image and feature vectors in the image library, the person to be found can be obtained. Specifically, the target pedestrian image that is most similar to the feature vector of the pedestrian image can be found through feature comparison, and basic information such as the shooting time and location of the target pedestrian image can be output.

应理解,图像库中保存有各个行人图像的特征向量以及行人图像对应的行人的相关信息。It should be understood that the image library stores the feature vectors of each pedestrian image and the relevant information of the pedestrian corresponding to the pedestrian image.
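下面的Python片段是特征比对过程的一个假设性草图:gallery的数据结构与字段名均为说明而设,实际系统中的图像库组织方式可能不同。The Python snippet below is a hypothetical sketch of the feature comparison process; the structure and field names of gallery are assumed for illustration and may differ in a real system.

```python
import math

def search_gallery(query_feat, gallery):
    """在图像库中寻找与查询特征欧式距离最小(最相似)的行人图像记录,
    并连同其拍摄时间、位置等基本信息一并返回。"""
    def dist(f1, f2):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    return min(gallery, key=lambda item: dist(query_feat, item["feature"]))

gallery = [
    {"feature": [0.9, 0.1], "time": "08:00", "location": "gate-1"},
    {"feature": [0.1, 0.9], "time": "08:05", "location": "gate-2"},
]
print(search_gallery([0.8, 0.2], gallery)["location"])  # gate-1
```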

应理解,这里的单图像拍摄设备标注数据可以包括多个训练图像和多个训练图像的标注数据。单图像拍摄设备标注数据是指针对每个图像拍摄设备获取到的训练图像进行单独标注,而不需要在不同的图像拍摄设备之间寻找是否出现了相同的行人,这种标注方式不用关注不同图像拍摄设备拍摄到的训练图像之间的关系,可以节省大量的标记时间,减少标注的复杂度。上述多个训练图像和多个训练图像的标注数据也可以统称为训练数据。It should be understood that the single image capture device annotation data here may include multiple training images and annotation data of multiple training images. Single image capture device annotation data refers to individual annotation for each training image acquired by the image capture device, without the need to search for the same pedestrian between different image capture devices. This annotation method does not need to pay attention to the relationship between the training images captured by different image capture devices, which can save a lot of labeling time and reduce the complexity of annotation. The above-mentioned multiple training images and the annotation data of multiple training images can also be collectively referred to as training data.
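下面的Python片段给出单图像拍摄设备标注数据的一种假设性组织形式,字段名为说明而设;其中的校验体现了"具有相同行人标识信息的训练图像来自同一图像拍摄设备"这一约束。The following Python snippet shows one hypothetical way to organize single image capture device annotation data (field names are assumed); the check reflects the constraint that training images with the same pedestrian identification information come from the same image capture device.

```python
from collections import defaultdict

# 每个图像拍摄设备(camera_id)下的行人编号独立标注,不同设备之间的行人标识信息不重叠
annotations = [
    {"camera_id": 1, "person_id": "cam1_001", "bbox": (34, 50, 120, 260)},
    {"camera_id": 1, "person_id": "cam1_002", "bbox": (200, 40, 290, 250)},
    {"camera_id": 2, "person_id": "cam2_001", "bbox": (10, 60, 95, 255)},
]

# 校验:具有相同行人标识信息的图像必须来自同一图像拍摄设备
cams_per_person = defaultdict(set)
for a in annotations:
    cams_per_person[a["person_id"]].add(a["camera_id"])
print(all(len(c) == 1 for c in cams_per_person.values()))  # True
```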

图6是本申请实施例的行人再识别网络的训练方法的总体流程示意图。FIG6 is a schematic diagram of the overall flow of a training method for a person re-identification network according to an embodiment of the present application.

如图6所示,通过对每个图像拍摄设备获取到的视频图像进行单独的数据标注,可以得到单图像拍摄设备标注数据。单图像拍摄设备标注数据的最大优点就是易于标注和收集,在本申请中,单图像拍摄设备标注数据并不要求同一个行人在多个图像拍摄设备下出现。As shown in Figure 6, by individually annotating the video images acquired by each image capture device, single image capture device annotated data can be obtained. The biggest advantage of single image capture device annotated data is that it is easy to annotate and collect. In this application, single image capture device annotated data does not require the same pedestrian to appear under multiple image capture devices.

在单图像拍摄设备标注数据中,假设每个行人只在一个图像拍摄设备(或一个图像拍摄设备组)中出现,这样,利用行人检测跟踪在视频中得到其行人图像后,只需要很少的人力就可以将相近的帧中同一个人的几张图关联起来,形成标注。而且每个图像拍摄设备的标注是相对独立的,不同的图像拍摄设备下行人编号不会有重叠。通过为不同图像拍摄设备设置不同的采集时间段,可以使得每个图像拍摄设备采集的视频中重复出现的人数很少,从而达成单图像拍摄设备标注数据的要求。In the single image capture device annotation data, it is assumed that each pedestrian only appears in one image capture device (or one image capture device group). In this way, after obtaining the pedestrian image in the video using pedestrian detection and tracking, only a small amount of manpower is needed to associate several images of the same person in similar frames to form annotations. Moreover, the annotations of each image capture device are relatively independent, and there will be no overlap in the pedestrian numbers on different image capture devices. By setting different acquisition time periods for different image capture devices, the number of people who appear repeatedly in the video captured by each image capture device can be reduced, thereby meeting the requirements of single image capture device annotation data.

在某些比较小的场景(例如,一个办公园区)中,大多数人本身活动范围小,相当多的人只在某一个图像拍摄设备组出现,这样的数据能够天然满足这项要求。由于一个图像拍摄设备组中的相机视野相近或有重叠,光照条件也相似,这些相机基本可以等效于一个摄像机。In some relatively small scenes (for example, an office park), most people have a small range of activities, and quite a few people only appear in a certain image capture device group. Such data can naturally meet this requirement. Since the cameras in an image capture device group have similar or overlapping fields of view and similar lighting conditions, these cameras can basically be equivalent to one camera.

在得到单图像拍摄设备标注数据之后就可以利用该单图像拍摄设备标注数据进行对行人再识别网络(模型)进行训练,训练得到的行人再识别网络就可以用于测试和部署了。具体地,训练得到的行人再识别网络就可以用于执行本申请实施例的行人再识别方法。After obtaining the single image shooting device annotated data, the single image shooting device annotated data can be used to train the pedestrian re-identification network (model), and the trained pedestrian re-identification network can be used for testing and deployment. Specifically, the trained pedestrian re-identification network can be used to execute the pedestrian re-identification method of the embodiment of the present application.

图7是本申请实施例的行人再识别网络的训练方法的示意性流程图。图7所示的方法可以由本申请实施例的行人再识别网络的训练装置来执行(例如,可以由图10和图11所示的装置来执行),图7所示的方法包括步骤1001至1008,下面对这些步骤进行详细的介绍。FIG7 is a schematic flow chart of a training method for a person re-identification network according to an embodiment of the present application. The method shown in FIG7 can be performed by a training device for a person re-identification network according to an embodiment of the present application (for example, it can be performed by the devices shown in FIG10 and FIG11 ). The method shown in FIG7 includes steps 1001 to 1008, which are described in detail below.

1001、开始。1001. Start.

步骤1001表示开始行人再识别网络的训练过程。Step 1001 indicates starting the training process of the person re-identification network.

1002、获取训练数据。1002. Obtain training data.

上述步骤1002中的训练数据包括M(M为大于1的整数)个训练图像以及M个训练图像的标注数据,其中,在M个训练图像中,每个训练图像包括行人,每个训练图像的标注数据包括每个训练图像中的行人所在的包围框和行人标识信息,不同的行人对应不同的行人标识信息,在M个训练图像中,具有相同的行人标识信息的训练图像来自于同一图像拍摄设备。The training data in the above step 1002 includes M (M is an integer greater than 1) training images and annotation data of the M training images, wherein, in the M training images, each training image includes a pedestrian, and the annotation data of each training image includes a bounding box and pedestrian identification information of the pedestrian in each training image, different pedestrians correspond to different pedestrian identification information, and in the M training images, the training images with the same pedestrian identification information come from the same image capture device.

上述图像拍摄设备具体可以是摄像机、照相机等能够获取行人图像的设备。The above-mentioned image capturing device may specifically be a device such as a video camera or a still camera that can capture images of pedestrians.

上述步骤1002中的行人标识信息也可以称为行人身份标识信息,是用于标识行人身份的一种信息,每个行人可以对应唯一的行人标识信息,该行人标识信息的表示方式有多种,只要能够指示行人的身份信息即可,例如,该行人标识信息具体可以是行人身份(identity,ID),也就是说,可以为每一个行人分配一个唯一的ID。The pedestrian identification information in the above step 1002 can also be called pedestrian identity identification information, which is a kind of information used to identify the identity of the pedestrian. Each pedestrian can correspond to unique pedestrian identification information. There are many ways to represent the pedestrian identification information, as long as it can indicate the identity information of the pedestrian. For example, the pedestrian identification information can specifically be the pedestrian identity (identity, ID), that is, a unique ID can be assigned to each pedestrian.

1003、对行人再识别网络的网络参数进行初始化处理,以得到行人再识别网络的网络参数的初始值。1003. Initialize the network parameters of the pedestrian re-identification network to obtain initial values of the network parameters of the pedestrian re-identification network.

上述步骤1003中可以随机设置行人再识别网络的网络参数,得到行人再识别网络的网络参数的初始值。In the above step 1003, the network parameters of the pedestrian re-identification network can be randomly set to obtain the initial values of the network parameters of the pedestrian re-identification network.

1004、将M个训练图像中的一批训练图像输入到行人再识别网络进行特征提取,得到一批训练图像中的每个训练图像的特征向量。1004. Input a batch of training images from the M training images into a person re-identification network for feature extraction to obtain a feature vector for each training image in the batch of training images.

上述一批训练图像是M个训练图像中的部分训练图像,在采用M个训练图像对行人再识别网络进行训练时,可以将M个训练图像分成不同的批次对行人再识别网络进行训练,每个批次的训练图像的数目可以相同也可以不同。The above-mentioned batch of training images is part of the M training images. When the M training images are used to train the pedestrian re-recognition network, the M training images can be divided into different batches to train the pedestrian re-recognition network. The number of training images in each batch can be the same or different.

例如,共有5000个训练图像,可以在每个批次输入100个训练图像对行人再识别网络进行训练。For example, if there are 5,000 training images in total, 100 training images can be input in each batch to train the pedestrian re-identification network.

上述一批训练图像可以包括N个锚点图像,其中,该N个锚点图像是上述一批训练图像中的任意N个训练图像,该N个锚点图像中的每个锚点图像对应一个最难正样本图像,一个第一最难负样本图像和一个第二最难负样本图像,N为正整数,并且N小于M。The above-mentioned batch of training images may include N anchor images, wherein the N anchor images are any N training images in the above-mentioned batch of training images, each anchor image in the N anchor images corresponds to a most difficult positive sample image, a first most difficult negative sample image and a second most difficult negative sample image, N is a positive integer, and N is less than M.

下面对每个锚点图像对应的最难正样本图像,第一最难负样本图像和第二最难负样本图像进行说明。The most difficult positive sample image, the first most difficult negative sample image and the second most difficult negative sample image corresponding to each anchor point image are described below.

每个锚点图像对应的最难正样本图像:上述一批训练图像中与每个锚点图像的行人标识信息相同,并且与每个锚点图像的特征向量之间的距离最远的训练图像;The most difficult positive sample image corresponding to each anchor image: the training image in the above batch of training images that has the same pedestrian identification information as each anchor image and has the farthest distance from the feature vector of each anchor image;

每个锚点图像对应的第一最难负样本图像:上述一批训练图像中与每个锚点图像来自于同一图像拍摄设备,并与每个锚点图像的行人标识信息不同且与每个锚点图像的特征向量之间的距离最近的训练图像;The first most difficult negative sample image corresponding to each anchor image: a training image in the above batch of training images that comes from the same image capture device as each anchor image, has different pedestrian identification information from each anchor image, and has the closest distance to the feature vector of each anchor image;

每个锚点图像对应的第二最难负样本图像:上述一批训练图像中与每个锚点图像来自不同图像拍摄设备,并与每个锚点图像的行人标识信息不同且与每个锚点图像的特征向量之间的距离最近的训练图像。The second most difficult negative sample image corresponding to each anchor image: a training image in the above batch of training images that comes from a different image capture device than each anchor image, has different pedestrian identification information from each anchor image, and has the closest distance to the feature vector of each anchor image.
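根据上述三个定义,下面给出一个假设性的Python草图,从一批图像中为锚点图像挑选最难正样本、第一最难负样本和第二最难负样本;数据组织方式(feature, person_id, camera_id)为说明而设。Based on the three definitions above, the hypothetical Python sketch below selects the hardest positive, the first hardest negative and the second hardest negative for an anchor from a batch; the data layout (feature, person_id, camera_id) is assumed for illustration.

```python
import math

def euclidean(f1, f2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def mine_hard_samples(anchor, batch):
    """anchor 与 batch 中的元素形如 (feature, person_id, camera_id)(假设的数据组织方式)。
    返回 (最难正样本, 第一最难负样本, 第二最难负样本) 在 batch 中的下标。"""
    fa, pid_a, cam_a = anchor
    hardest_pos = hardest_neg_intra = hardest_neg_inter = None
    d_pos, d_intra, d_inter = -1.0, float("inf"), float("inf")
    for i, (f, pid, cam) in enumerate(batch):
        d = euclidean(fa, f)
        if pid == pid_a:
            if d > d_pos:        # 行人标识相同且距离最远 -> 最难正样本
                d_pos, hardest_pos = d, i
        elif cam == cam_a:
            if d < d_intra:      # 同设备、行人标识不同且距离最近 -> 第一最难负样本
                d_intra, hardest_neg_intra = d, i
        else:
            if d < d_inter:      # 不同设备、行人标识不同且距离最近 -> 第二最难负样本
                d_inter, hardest_neg_inter = d, i
    return hardest_pos, hardest_neg_intra, hardest_neg_inter

batch = [([0.0, 0.0], "a", 1), ([1.0, 0.0], "a", 1),
         ([0.2, 0.0], "b", 1), ([0.1, 0.0], "c", 2)]
print(mine_hard_samples(batch[0], batch))  # (1, 2, 3)
```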

1005、根据上述一批训练图像的特征向量确定损失函数的函数值。1005. Determine a function value of a loss function according to the feature vectors of the above batch of training images.

上述步骤1005中的损失函数的函数值是N个第一损失函数的函数值经过平均处理得到的。The function value of the loss function in the above step 1005 is obtained by averaging the function values of the N first loss functions.

其中,上述N个第一损失函数中的每个第一损失函数的函数值是根据N个锚点图像中的每个锚点图像对应的第一差值和第二差值计算得到的。The function value of each of the N first loss functions is calculated based on the first difference and the second difference corresponding to each of the N anchor point images.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值和第二差值的和。Optionally, the function value of each of the first loss functions is the sum of the first difference and the second difference corresponding to each anchor point image.

上述N为正整数,上述N小于M。当N=1时,只有一个第一损失函数的函数值,此时可以直接将该第一损失函数的函数值作为步骤1005中的损失函数的函数值。The above N is a positive integer, and the above N is less than M. When N=1, there is only one function value of the first loss function, and at this time, the function value of the first loss function can be directly used as the function value of the loss function in step 1005 .

例如,第一损失函数的函数值可以如公式(2)所示。For example, the function value of the first loss function may be as shown in formula (2).

L1=D1+D2 (2)L1=D1+D2 (2)

其中,L1表示第一损失函数的函数值,D1表示上述第一差值,D2表示上述第二差值。Among them, L1 represents the function value of the first loss function, D1 represents the above-mentioned first difference, and D2 represents the above-mentioned second difference.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值的绝对值和第二差值的绝对值的和。Optionally, the function value of each of the above-mentioned first loss functions is the sum of the absolute value of the first difference and the absolute value of the second difference corresponding to each anchor point image.

例如,第一损失函数的函数值可以如公式(3)所示。For example, the function value of the first loss function may be as shown in formula (3).

L1=|D1|+|D2| (3)L1=|D1|+|D2| (3)

其中,L1表示第一损失函数的函数值,|D1|表示上述第一差值的绝对值,|D2|表示上述第二差值的绝对值。Wherein, L1 represents the function value of the first loss function, |D1| represents the absolute value of the first difference, and |D2| represents the absolute value of the second difference.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值、第二差值和其他常数项的和。Optionally, the function value of each of the above-mentioned first loss functions is the sum of the first difference, the second difference and other constant terms corresponding to each anchor point image.

例如,第一损失函数的函数值可以如公式(4)所示。For example, the function value of the first loss function may be as shown in formula (4).

L1=D1+D2+m (4)L1=D1+D2+m (4)

其中,L1表示第一损失函数的函数值,D1表示上述第一差值,D2表示上述第二差值,m表示常数,m的大小可以根据经验来设置合适的数值。Among them, L1 represents the function value of the first loss function, D1 represents the above-mentioned first difference, D2 represents the above-mentioned second difference, m represents a constant, and the size of m can be set to a suitable value based on experience.

再如,第一损失函数的函数值可以如公式(5)所示。For another example, the function value of the first loss function can be shown as formula (5).

L1=|m1+D1|+|m2+D2| (5)L1=| m1 +D1|+| m2 +D2| (5)

其中,L1表示第一损失函数的函数值,D1表示上述第一差值,D2表示上述第二差值,m1和m2表示常数,m1和m2的大小可以根据经验来设置合适的数值。Wherein, L1 represents the function value of the first loss function, D1 represents the above-mentioned first difference, D2 represents the above-mentioned second difference, m1 and m2 represent constants, and the sizes of m1 and m2 can be set to appropriate values based on experience.

应理解,上文中在计算第一损失函数的函数值时对D1和D2求绝对值只是一种可选的实现方式,实际上在确定第一损失函数的函数值时还可以对D1和D2进行其他操作,例如,可以对D1和D2进行[X]+操作(该操作可以称为对函数值取正部的操作)。It should be understood that in the above text, obtaining the absolute values of D1 and D2 when calculating the function value of the first loss function is only an optional implementation method. In fact, other operations can be performed on D1 and D2 when determining the function value of the first loss function. For example, the [X] + operation can be performed on D1 and D2 (this operation can be called an operation of taking the positive part of the function value).

其中,当X大于0时,[X]+=X,而当X小于0时,[X]+=0。(具体可以参见https://en.wikipedia.org/wiki/Positive_and_negative_parts)When X is greater than 0, [X] + = X, and when X is less than 0, [X] + = 0. (For details, please refer to https://en.wikipedia.org/wiki/Positive_and_negative_parts)
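下面用一个简短的Python片段示意[X]+操作,并按公式(5)的形式用它代替绝对值来计算第一损失函数的函数值;这只是一个假设性草图。The short Python snippet below illustrates the [X]+ operation and uses it (in place of the absolute value, following the form of formula (5)) to compute the function value of the first loss function; this is only a hypothetical sketch.

```python
def positive_part(x):
    """[X]+ 操作:X 大于 0 时取 X,否则取 0。"""
    return x if x > 0 else 0.0

# 按公式(5)的形式,用 [X]+ 代替绝对值构造第一损失函数(m1、m2 取经验值 0.1)
def first_loss(d1, d2, m1=0.1, m2=0.1):
    return positive_part(m1 + d1) + positive_part(m2 + d2)

print(positive_part(-0.3))    # 0.0
print(first_loss(0.5, -0.4))  # 0.6:[0.1+0.5]+ = 0.6,[0.1-0.4]+ = 0
```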

下面对第一差值和第二差值的含义进行说明。The meanings of the first difference and the second difference are explained below.

每个锚点图像对应的第一差值:每个锚点图像对应的最难正样本距离与每个锚点图像对应的第二最难负样本距离的差;The first difference value corresponding to each anchor image: the difference between the most difficult positive sample distance corresponding to each anchor image and the second most difficult negative sample distance corresponding to each anchor image;

每个锚点图像对应的第二差值:每个锚点图像对应的第二最难负样本距离与每个锚点图像对应的第一最难负样本距离的差。The second difference value corresponding to each anchor image: the difference between the second most difficult negative sample distance corresponding to each anchor image and the first most difficult negative sample distance corresponding to each anchor image.

上述第一差值和第二差值是对不同的距离进行做差得到的。下面对这些距离的含义进行说明。The first difference and the second difference are obtained by subtracting different distances. The meanings of these distances are explained below.

每个锚点图像对应的最难正样本距离:每个锚点图像对应的最难正样本图像的特征向量与每个锚点图像的特征向量的距离;The most difficult positive sample distance corresponding to each anchor image: the distance between the feature vector of the most difficult positive sample image corresponding to each anchor image and the feature vector of each anchor image;

每个锚点图像对应的第二最难负样本距离:每个锚点图像对应的第二最难负样本图像的特征向量与每个锚点图像的特征向量的距离;The second most difficult negative sample distance corresponding to each anchor image: the distance between the feature vector of the second most difficult negative sample image corresponding to each anchor image and the feature vector of each anchor image;

每个锚点图像对应的第一最难负样本距离:每个锚点图像对应的第一最难负样本图像的特征向量与每个锚点图像的特征向量的距离。The first hardest negative sample distance corresponding to each anchor image: the distance between the feature vector of the first hardest negative sample image corresponding to each anchor image and the feature vector of each anchor image.

具体地,假设训练过程中一个批次的图像对应的图像拍摄设备数为C,每个图像拍摄设备下的行人数为P,每个行人的图像数为K,那么,一个批次的图像数就是C×P×K。记该批次的图像中的锚点图像为xa,记f(x)为网络模型输出的特征(向量),记||f1-f2||为两个特征f1和f2的欧式距离,那么,上述最难正样本距离Dp可以如公式(6)所示。Specifically, assuming that the number of image capture devices corresponding to a batch of images during training is C, the number of pedestrians under each image capture device is P, and the number of images of each pedestrian is K, then the number of images in a batch is C×P×K. Let xa denote an anchor image in the batch, f(x) the feature (vector) output by the network model, and ||f1-f2|| the Euclidean distance between two features f1 and f2. Then, the above-mentioned most difficult positive sample distance Dp can be shown as formula (6).

Dp=max||f(xa)-f(xp)|| (6)

其中,xp取遍该批次中与xa的行人标识信息相同的训练图像。where xp ranges over the training images in the batch that have the same pedestrian identification information as xa.

上述第二最难负样本距离Dn2可以如公式(7)所示。The second most difficult negative sample distance Dn2 can be expressed as formula (7).

Dn2=min||f(xa)-f(xn)|| (7)

其中,xn取遍该批次中与xa来自不同图像拍摄设备且行人标识信息不同的训练图像。where xn ranges over the training images in the batch that come from a different image capture device than xa and have different pedestrian identification information.

上述第一最难负样本距离Dn1可以如公式(8)所示。The first most difficult negative sample distance Dn1 can be expressed as formula (8).

Dn1=min||f(xa)-f(xn)|| (8)

其中,xn取遍该批次中与xa来自同一图像拍摄设备且行人标识信息不同的训练图像。where xn ranges over the training images in the batch that come from the same image capture device as xa and have different pedestrian identification information.

上述第一差值可以是公式(6)与公式(7)的差,即D1=Dp-Dn2;上述第二差值可以是公式(7)与公式(8)的差,即D2=Dn2-Dn1。The first difference may be the difference between formula (6) and formula (7), that is, D1=Dp-Dn2; the second difference may be the difference between formula (7) and formula (8), that is, D2=Dn2-Dn1.

由上述第一差值和第二差值构成的损失函数可以如公式(9)所示。其中,L表示损失函数,m1和m2是两个常量,具体的取值可以根据经验来设置。例如,m1=0.1,m2=0.1。The loss function composed of the first difference and the second difference can be shown as formula (9). Wherein, L represents the loss function, m1 and m2 are two constants, and the specific values can be set according to experience. For example, m1=0.1, m2=0.1.

L=[m1+Dp-Dn2]+ + [m2+Dn2-Dn1]+ (9)

上述公式(9)中,[m1+Dp-Dn2]+表示对m1+Dp-Dn2进行[X]+操作:当m1+Dp-Dn2的取值大于或者等于0时,[m1+Dp-Dn2]+的取值就是m1+Dp-Dn2,而当m1+Dp-Dn2的取值小于0时,[m1+Dp-Dn2]+的取值就是0。In the above formula (9), [m1+Dp-Dn2]+ means performing the [X]+ operation on m1+Dp-Dn2: when the value of m1+Dp-Dn2 is greater than or equal to 0, the value of [m1+Dp-Dn2]+ is m1+Dp-Dn2, and when the value of m1+Dp-Dn2 is less than 0, the value of [m1+Dp-Dn2]+ is 0.

类似地,[m2+Dn2-Dn1]+表示对m2+Dn2-Dn1进行[X]+操作:当m2+Dn2-Dn1的取值大于或者等于0时,[m2+Dn2-Dn1]+的取值就是m2+Dn2-Dn1,而当m2+Dn2-Dn1的取值小于0时,[m2+Dn2-Dn1]+的取值就是0。Similarly, [m2+Dn2-Dn1]+ means performing the [X]+ operation on m2+Dn2-Dn1: when the value of m2+Dn2-Dn1 is greater than or equal to 0, the value of [m2+Dn2-Dn1]+ is m2+Dn2-Dn1, and when the value of m2+Dn2-Dn1 is less than 0, the value of [m2+Dn2-Dn1]+ is 0.
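下面的Python草图按公式(6)至(9)的含义计算单个锚点图像的损失,数据组织方式(feature, person_id, camera_id)为假设;实际训练中这些距离通常在整个批次的特征矩阵上向量化计算。The Python sketch below computes the loss of a single anchor image following the meaning of formulas (6) to (9); the data layout (feature, person_id, camera_id) is assumed, and in real training these distances are usually computed in vectorized form over the whole batch.

```python
import math

def euclidean(f1, f2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def anchor_loss(anchor, batch, m1=0.1, m2=0.1):
    """按公式(6)~(9)的含义计算单个锚点图像的损失(假设性草图)。"""
    fa, pid_a, cam_a = anchor
    # 公式(6):最难正样本距离
    d_pos = max(euclidean(fa, f) for f, pid, _ in batch if pid == pid_a)
    # 公式(7):第二最难负样本距离(不同图像拍摄设备)
    d_inter = min(euclidean(fa, f) for f, pid, cam in batch
                  if pid != pid_a and cam != cam_a)
    # 公式(8):第一最难负样本距离(同一图像拍摄设备)
    d_intra = min(euclidean(fa, f) for f, pid, cam in batch
                  if pid != pid_a and cam == cam_a)
    pos = lambda x: x if x > 0 else 0.0  # [X]+ 操作
    # 公式(9):L = [m1+D1]+ + [m2+D2]+,其中 D1=d_pos-d_inter,D2=d_inter-d_intra
    return pos(m1 + d_pos - d_inter) + pos(m2 + d_inter - d_intra)

batch = [([0.0, 0.0], "a", 1), ([1.0, 0.0], "a", 1),
         ([0.2, 0.0], "b", 1), ([0.4, 0.0], "c", 2)]
print(round(anchor_loss(batch[0], batch), 6))  # 1.0
```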

1006、根据损失函数的函数值对行人再识别网络的网络参数进行更新。1006. Update the network parameters of the person re-identification network according to the function value of the loss function.

具体地,可以根据上述公式(9)所示的损失函数的函数值对行人再识别网络的网络参数进行更新。并且在更新的过程中使得公式(9)所示的损失函数的函数值越来越小。Specifically, the network parameters of the person re-identification network can be updated according to the function value of the loss function shown in the above formula (9), and the function value of the loss function shown in the formula (9) is made smaller and smaller during the updating process.

1007、确定行人再识别网络是否满足预设要求。1007. Determine whether the pedestrian re-identification network meets the preset requirements.

可选地,行人再识别网络满足预设要求,包括:行人再识别网络满足下列条件中的至少一种:Optionally, the pedestrian re-identification network meets preset requirements, including: the pedestrian re-identification network meets at least one of the following conditions:

(1)行人再识别网络的行人识别性能满足预设性能要求;(1) The pedestrian recognition performance of the pedestrian re-identification network meets the preset performance requirements;

(2)行人再识别网络的网络参数的更新次数大于或者等于预设次数;(2) The number of updates of the network parameters of the person re-identification network is greater than or equal to the preset number;

(3)损失函数的函数值小于或者等于预设阈值。(3) The function value of the loss function is less than or equal to the preset threshold.

在步骤1007中,当行人再识别网络满足上述条件(1)至(3)中的至少一个时,可以确定行人再识别网络满足预设要求,执行步骤1008,行人再识别网络的训练过程结束;而当行人再识别网络不满足上述条件(1)至(3)中的任意一个时,说明行人再识别网络尚未满足预设要求,需要继续对行人再识别网络进行训练,也就是重新执行步骤1004至1007,直到得到满足预设要求的行人再识别网络。In step 1007, when the pedestrian re-identification network satisfies at least one of the above conditions (1) to (3), it can be determined that the pedestrian re-identification network meets the preset requirements, and step 1008 is executed, and the training process of the pedestrian re-identification network ends; when the pedestrian re-identification network does not meet any of the above conditions (1) to (3), it means that the pedestrian re-identification network has not yet met the preset requirements, and it is necessary to continue to train the pedestrian re-identification network, that is, re-execute steps 1004 to 1007 until a pedestrian re-identification network that meets the preset requirements is obtained.
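步骤1004至1007的迭代过程可以概括为下面的假设性Python草图,其中train_step为假设的占位接口,返回一次参数更新后的损失函数值。The iteration of steps 1004 to 1007 can be summarized by the hypothetical Python sketch below, where train_step is an assumed placeholder interface returning the loss value after one parameter update.

```python
def train(train_step, batches, max_updates=10000, loss_threshold=0.01):
    """train_step(batch) 为假设的接口:完成一次前向传播、损失计算与参数更新并返回损失值。
    对应步骤1007:损失低于阈值或更新次数达到上限即停止,返回实际更新次数。"""
    updates = 0
    for batch in batches:
        loss = train_step(batch)  # 步骤1004~1006
        updates += 1
        if loss <= loss_threshold or updates >= max_updates:  # 步骤1007
            break
    return updates

# 用模拟的损失序列演示:损失在第 3 次更新时降到阈值以下,训练停止
losses = iter([0.5, 0.1, 0.009, 0.005])
print(train(lambda b: next(losses), range(10)))  # 3
```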

上述预设阈值可以根据经验来灵活设置,当预设阈值设置得过大时,训练得到的行人再识别网络的行人识别效果可能不够好,而当预设阈值设置得过小时,在训练时损失函数的函数值可能难以收敛。The above-mentioned preset threshold can be flexibly set based on experience. When the preset threshold is set too large, the pedestrian recognition effect of the trained pedestrian re-identification network may not be good enough, and when the preset threshold is set too small, the function value of the loss function may be difficult to converge during training.

可选地,上述预设阈值的取值范围为[0,0.01]。Optionally, the value range of the above preset threshold is [0, 0.01].

具体地,上述预设阈值的取值可以为0.01。Specifically, the value of the preset threshold may be 0.01.

上述损失函数的函数值小于或者等于预设阈值,具体包括:第一差值小于第一预设阈值,第二差值小于第二预设阈值。The function value of the above loss function is less than or equal to a preset threshold, specifically including: the first difference is less than the first preset threshold, and the second difference is less than the second preset threshold.

上述第一预设阈值和第二预设阈值也可以根据经验来确定,当第一预设阈值和第二预设阈值设置的过大时训练得到的行人再识别网络的行人识别效果可能不够好,而当第一预设阈值和第二预设阈值设置的过小时在训练时损失函数的函数值可能难以收敛。The above-mentioned first preset threshold and second preset threshold can also be determined based on experience. When the first preset threshold and the second preset threshold are set too large, the pedestrian recognition effect of the trained pedestrian re-identification network may not be good enough, and when the first preset threshold and the second preset threshold are set too small, the function value of the loss function may be difficult to converge during training.

可选地,上述第一预设阈值的取值范围为[0,0.4]。Optionally, the value range of the first preset threshold is [0, 0.4].

可选地,上述第二预设阈值的取值范围为[0,0.4]。Optionally, the value range of the second preset threshold is [0, 0.4].

具体地,上述第一预设阈值和上述第二预设阈值均可以取0.1。Specifically, the first preset threshold and the second preset threshold may both be 0.1.

1008、训练结束。1008. Training is over.

另外,在本申请中,几个训练图像来自于同一图像拍摄设备是指这几个训练图像是通过同一个图像拍摄设备进行拍摄得到的。In addition, in the present application, several training images coming from the same image capturing device means that these several training images are captured by the same image capturing device.

本申请中,在构造损失函数的过程中考虑到了来自于不同图像拍摄设备和相同图像拍摄设备的最难负样本图像,并在训练过程中使得第一差值和第二差值尽可能的减小,从而能够尽可能的消除图像拍摄设备本身信息对图像信息的干扰,使得训练出来的行人再识别网络能够更准确的从图像中进行特征的提取。In the present application, the most difficult negative sample images from different image capturing devices and the same image capturing device are taken into account in the process of constructing the loss function, and the first difference and the second difference are reduced as much as possible during the training process, so as to eliminate the interference of the image capturing device's own information on the image information as much as possible, so that the trained pedestrian re-identification network can more accurately extract features from the image.

具体地,在对行人再识别网络的训练过程中,通过优化行人再识别网络的网络参数使得第一差值和第二差值尽可能的小,从而使得最难正样本距离与第二最难负样本距离的差以及第二最难负样本距离和第一最难负样本距离的差尽可能的小,进而使得行人再识别网络能够尽可能的区分开最难正样本图像与第二最难负样本图像的特征,以及第二最难负样本图像与第一最难负样本图像的特征,从而使得训练出来的行人再识别网络能够更好更准确地对图像进行特征提取。Specifically, during the training process of the pedestrian re-identification network, the network parameters of the pedestrian re-identification network are optimized to make the first difference and the second difference as small as possible, thereby making the difference between the most difficult positive sample distance and the second most difficult negative sample distance, and the difference between the second most difficult negative sample distance and the first most difficult negative sample distance, as small as possible, so that the pedestrian re-identification network can distinguish as far as possible the features of the most difficult positive sample image from those of the second most difficult negative sample image, and the features of the second most difficult negative sample image from those of the first most difficult negative sample image, so that the trained pedestrian re-identification network can extract features from images better and more accurately.
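This excerpt does not reproduce formula (9), but the two differences described here admit a common margin-based (hinge) reading, sketched below with the margins m1 = m2 = 0.1 mentioned later in the description; the exact form of the patented loss may differ:

```python
# Hypothetical hinge-style reading of the two differences; not formula (9) itself.
def quadruplet_loss(d_ap, d_an2, d_an1, m1=0.1, m2=0.1):
    """d_ap  : anchor-to-most-difficult-positive distance
    d_an2 : anchor-to-second-most-difficult-negative distance
    d_an1 : anchor-to-first-most-difficult-negative distance"""
    first_diff = max(0.0, d_ap - d_an2 + m1)    # push positives closer than negatives
    second_diff = max(0.0, d_an2 - d_an1 + m2)  # order the two negative distances
    return first_diff + second_diff
```

Both hinge terms vanish once the required distance ordering holds with margin, which is when the first and second differences are considered small enough.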

下面结合图8对上述步骤1004和步骤1005中根据一批训练图像确定损失函数的函数值的过程进行详细说明。The process of determining the function value of the loss function according to a batch of training images in the above steps 1004 and 1005 is described in detail below in conjunction with FIG. 8.

如图8所示,将一批训练图像输入到行人再识别网络之后,可以得到这一批训练图像的特征向量。接下来,可以从这一批训练图像中选择出多个锚点图像,并为该多个锚点图像中的每个锚点图像确定对应的最难正样本图像、第一最难负样本图像和第二最难负样本图像。As shown in Figure 8, after a batch of training images are input into the person re-identification network, feature vectors of the batch of training images can be obtained. Next, multiple anchor images can be selected from the batch of training images, and the corresponding most difficult positive sample image, the first most difficult negative sample image, and the second most difficult negative sample image can be determined for each of the multiple anchor images.

这样就可以得到很多个由四个训练图像(分别是锚点图像,锚点图像对应的最难正样本图像,锚点图像对应的第一最难负样本图像和锚点图像对应的第二最难负样本图像)组成的训练图像组,然后可以根据每个训练图像组中的训练图像的特征向量之间的距离关系可以确定一个第一损失函数。In this way, many training image groups consisting of four training images (the anchor image, the most difficult positive sample image corresponding to the anchor image, the first most difficult negative sample image corresponding to the anchor image, and the second most difficult negative sample image corresponding to the anchor image) can be obtained, and then a first loss function can be determined according to the distance relationship between the feature vectors of the training images in each training image group.

如图8所示,一共有N个训练图像组,根据该N个训练图像组一共可以确定N个第一损失函数,接下来,对这N个第一损失函数的函数值进行平均处理,就可以得到上述步骤1005中的损失函数的函数值。As shown in FIG. 8, there are a total of N training image groups, and a total of N first loss functions can be determined based on the N training image groups. Next, the function values of the N first loss functions are averaged to obtain the function value of the loss function in the above step 1005.

应理解,上述N个训练图像组共包含N个锚点图像,该N个锚点图像各不相同,也就是说,每个训练图像组对应唯一的一个锚点图像。但是,不同的训练图像组中包含的其他训练图像(除锚点图像之外的其他图像)可以相同。例如,第一个训练图像组中的最难正样本图像可以与第二个训练图像组中的最难正样本图像相同。It should be understood that the above N training image groups contain a total of N anchor images, and the N anchor images are all different; that is, each training image group corresponds to a unique anchor image. However, the other training images (images other than the anchor image) contained in different training image groups can be the same. For example, the most difficult positive sample image in the first training image group may be the same as the most difficult positive sample image in the second training image group.

再如,假设上述一批训练图像的数目为100,那么,就可以从该100个训练图像中选择出10个(也可以是其他的数量,这里仅仅以10为例进行说明)锚点图像,然后从该100个训练图像中分别为每个锚点图像选择相应的最难正样本图像,第一最难负样本图像和第二最难负样本图像。从而得到10个训练图像组,根据该10个训练图像组可以得到10个第一损失函数的函数值,接下来,通过对该10个第一损失函数的函数值进行平均处理,就可以得到上述步骤1005中的损失函数的函数值。For another example, assuming that the number of the above batch of training images is 100, then 10 anchor images (or another number; 10 is used here only as an example) can be selected from the 100 training images, and then the corresponding most difficult positive sample image, first most difficult negative sample image and second most difficult negative sample image are selected for each anchor image from the 100 training images. Thus, 10 training image groups are obtained, and 10 function values of the first loss function can be obtained according to the 10 training image groups. Next, by averaging the function values of the 10 first loss functions, the function value of the loss function in the above step 1005 can be obtained.
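The per-anchor selection described in the paragraphs above can be sketched as follows. Euclidean distance between feature vectors is assumed, and which camera pool the patent labels the "first" versus "second" most difficult negative follows its earlier definitions, so the assignment in this sketch is only illustrative:

```python
import math

def dist(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def mine_quadruplet(a, feats, ids, cams):
    """For anchor index a, return (most difficult positive, most difficult
    cross-camera negative, most difficult same-camera negative)."""
    d = [dist(feats[a], f) for f in feats]  # distances from the anchor
    pos = [i for i in range(len(feats)) if i != a and ids[i] == ids[a]]
    neg_cross = [i for i in range(len(feats))
                 if ids[i] != ids[a] and cams[i] != cams[a]]
    neg_same = [i for i in range(len(feats))
                if ids[i] != ids[a] and cams[i] == cams[a]]
    hardest_pos = max(pos, key=lambda i: d[i])          # same identity, farthest
    hardest_cross = min(neg_cross, key=lambda i: d[i])  # other identity, closest
    hardest_same = min(neg_same, key=lambda i: d[i])    # other identity, closest
    return hardest_pos, hardest_cross, hardest_same
```

Each anchor then contributes one training image group, and the N per-group loss values are averaged as described above.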

下面对行人再识别网络的设计和训练过程进行详细的介绍。The design and training process of the pedestrian re-identification network are introduced in detail below.

本申请中的行人再识别网络可以采用现有的残差网络(例如,采用ResNet50)作为网络主体,并将最后的全连接层移除,在最后一层残差块(ResBlock)之后添加全局均值池化(global average pooling)层,并将获得2048维(也可以是其他的数值)的特征向量作为网络模型的输出。The pedestrian re-identification network in this application can adopt the existing residual network (for example, ResNet50) as the network body, remove the last fully connected layer, add a global average pooling layer after the last residual block (ResBlock), and obtain a 2048-dimensional (or other numerical) feature vector as the output of the network model.

在每个批次的训练图像中,每个摄像机可以采集4个人,每个人采集8张图,如果一个人的图像少于8张,就重复采集补满8张。In each batch of training images, each camera can capture images of 4 people, and 8 images are collected for each person. If there are fewer than 8 images of one person, the images are repeated to make up 8.
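A minimal sketch of the fill-to-eight rule, assuming that when a person has eight or more images the first eight are taken (the passage does not specify how they are chosen in that case):

```python
import itertools

def images_for_person(images, k=8):
    """Cycle through a pedestrian's images until k are collected, mirroring
    'if fewer than 8 images, repeat to fill 8'."""
    return list(itertools.islice(itertools.cycle(images), k))
```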

在对行人再识别网络进行训练时,可以采用上述公式(9)作为损失函数,在进行测试时,不同的数据集的摄像机数可以不同,例如,对于DukeMTMC-reID数据集,摄像机有8个,这时公式(9)中的C=8;对于Market-1501数据集,摄像机有6个,这时公式(9)中的C=6。When training the person re-identification network, the above formula (9) can be used as the loss function. When testing, the number of cameras in different data sets can be different. For example, for the DukeMTMC-reID data set, there are 8 cameras, in which case C=8 in formula (9); for the Market-1501 data set, there are 6 cameras, in which case C=6 in formula (9).

上述公式(9)所示的损失函数中的两个参数可以分别是m1=0.1,m2=0.1。输入的训练图像可以被缩放为256×128像素大小,在训练时可以使用自适应矩估计(Adam)优化器来训练网络参数,学习率可以设置为2×10⁻⁴。在100轮训练后,学习率指数衰减,直到200轮训练后学习率可以设置为2×10⁻⁷,这时可以停止训练。The two parameters in the loss function shown in the above formula (9) can be m1 = 0.1 and m2 = 0.1 respectively. The input training images can be scaled to 256×128 pixels. During training, the adaptive moment estimation (Adam) optimizer can be used to train the network parameters, and the learning rate can be set to 2×10⁻⁴. After 100 rounds of training, the learning rate decays exponentially until it reaches 2×10⁻⁷ after 200 rounds, at which time the training can be stopped.
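One plausible reading of this schedule, with the decay factor chosen so that the rate reaches 2×10⁻⁷ exactly at epoch 200 (the passage does not give the per-epoch factor):

```python
def learning_rate(epoch, base=2e-4, final=2e-7, decay_start=100, decay_end=200):
    """Constant base rate for the first decay_start epochs, then exponential
    decay reaching `final` at decay_end. Illustrative reconstruction only."""
    if epoch < decay_start:
        return base
    frac = (epoch - decay_start) / (decay_end - decay_start)
    return base * (final / base) ** frac  # geometric interpolation
```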

根据本申请实施例的行人再识别网络的训练方法训练得到的行人再识别网络可以用于执行本申请实施例的行人再识别方法,下面结合附图对本申请实施例的行人再识别方法进行描述。The pedestrian re-identification network trained according to the training method of the pedestrian re-identification network of the embodiment of the present application can be used to execute the pedestrian re-identification method of the embodiment of the present application. The pedestrian re-identification method of the embodiment of the present application is described below in conjunction with the accompanying drawings.

图9是本申请实施例的行人再识别方法的示意性流程图。图9所示的行人再识别方法可以由本申请实施例的行人再识别装置执行(例如,可以由图12和图13所示的装置执行),图9所示的行人再识别方法包括步骤2001至2003,下面对步骤2001至2003进行详细的介绍。FIG9 is a schematic flow chart of a pedestrian re-identification method according to an embodiment of the present application. The pedestrian re-identification method shown in FIG9 can be performed by a pedestrian re-identification device according to an embodiment of the present application (for example, it can be performed by the devices shown in FIG12 and FIG13 ). The pedestrian re-identification method shown in FIG9 includes steps 2001 to 2003, and steps 2001 to 2003 are described in detail below.

2001、获取待识别图像。2001. Obtain an image to be identified.

2002、利用行人再识别网络对待识别图像进行处理,得到待识别图像的特征向量。2002. The image to be identified is processed using a person re-identification network to obtain a feature vector of the image to be identified.

其中,步骤2002中采用的行人再识别网络可以是根据本申请实施例的行人再识别网络的训练方法训练得到的,具体地,步骤2002中的行人再识别网络可以是通过图7所示的方法训练得到的。Among them, the pedestrian re-identification network used in step 2002 can be trained according to the training method of the pedestrian re-identification network of the embodiment of the present application. Specifically, the pedestrian re-identification network in step 2002 can be trained by the method shown in Figure 7.

2003、根据待识别图像的特征向量与已有的行人图像的特征向量进行比对,得到待识别图像的识别结果。2003. Compare the feature vector of the image to be identified with the feature vector of the existing pedestrian image to obtain the recognition result of the image to be identified.

本申请中,采用本申请实施例的行人再识别网络的训练方法训练得到的行人再识别网络能够更好的进行特征的提取,因此,采用该行人再识别网络对待识别图像进行处理,能够取得更好的行人识别结果。In the present application, the pedestrian re-identification network trained by the training method of the pedestrian re-identification network of the embodiment of the present application can better extract features. Therefore, using the pedestrian re-identification network to process the image to be identified can obtain better pedestrian recognition results.

可选地,上述步骤2003具体包括:根据待识别图像的特征向量与已有的行人图像的特征向量进行比对,确定输出目标行人图像;输出目标行人图像以及目标行人图像的属性信息。Optionally, the above step 2003 specifically includes: determining to output a target pedestrian image according to comparing a feature vector of the image to be identified with a feature vector of an existing pedestrian image; and outputting the target pedestrian image and attribute information of the target pedestrian image.

其中,上述目标行人图像可以是已有的行人图像中特征向量与待识别图像的特征向量最相似的行人图像,该目标行人图像的属性信息包括该目标行人图像的拍摄时间,拍摄位置。另外,上述目标行人图像的属性信息中还可以包括行人的身份信息等。The target pedestrian image may be a pedestrian image whose feature vector is most similar to the feature vector of the image to be identified among the existing pedestrian images, and the attribute information of the target pedestrian image includes the shooting time and shooting location of the target pedestrian image. In addition, the attribute information of the target pedestrian image may also include the identity information of the pedestrian.
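A sketch of the comparison in step 2003, assuming Euclidean distance over feature vectors as the similarity measure (the passage only says "most similar"); the gallery record layout is hypothetical:

```python
import math

def retrieve_target(query_feat, gallery):
    """gallery: list of (feature_vector, attributes) pairs.
    Return the attributes of the most similar gallery image."""
    def d(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    best_feat, best_attrs = min(gallery, key=lambda item: d(query_feat, item[0]))
    return best_attrs
```

The returned attributes would carry the shooting time, shooting location and, optionally, the pedestrian's identity information described above.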

下面结合具体的测试结果对本申请实施例的行人再识别网络的行人识别的效果进行说明。The effect of pedestrian recognition of the pedestrian re-identification network in the embodiment of the present application is described below in combination with specific test results.

表1Table 1

表1示出了不同的方案在不同的数据集进行测试的结果,其中,测试结果包括Rank-1和平均精度均值(mean average precision,mAP),其中,Rank-1表示已有图像中特征向量与待识别图像的特征向量距离最近的图像与待识别图像属于同一个行人的概率。Table 1 shows the test results of different schemes on different data sets, where the test results include Rank-1 and mean average precision (mAP). Rank-1 indicates the probability that the image whose feature vector in the existing image is closest to the feature vector of the image to be identified belongs to the same pedestrian as the image to be identified.
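Rank-1 and mAP follow their standard retrieval definitions, which the passage does not spell out; a minimal computation is:

```python
def rank1_and_map(ranked_labels, query_labels):
    """ranked_labels[i]: gallery identity labels for query i, sorted from
    nearest to farthest feature distance. Standard Rank-1 / mAP definitions."""
    rank1 = sum(r[0] == q for r, q in zip(ranked_labels, query_labels)) / len(query_labels)
    aps = []
    for ranked, q in zip(ranked_labels, query_labels):
        hits, precisions = 0, []
        for k, label in enumerate(ranked, start=1):
            if label == q:
                hits += 1
                precisions.append(hits / k)  # precision at each relevant hit
        aps.append(sum(precisions) / max(hits, 1))
    return rank1, sum(aps) / len(aps)
```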

在上述表1中,数据集1为Duke-SCT,数据集2为Market-SCT。In the above Table 1, dataset 1 is Duke-SCT and dataset 2 is Market-SCT.

其中,Duke-SCT是DukeMTMC-reID数据集的子集,Market-SCT是Market-1501数据集的子集。在获取Duke-SCT和Market-SCT时,我们从原有的数据集(DukeMTMC-reID和Market-1501)中,对训练数据做了如下处理:每个行人随机选某一个摄像机下的图像进行保留(不同行人可能选择到不同的摄像机),从而形成了新的数据集Duke-SCT和Market-SCT。同时,测试集保持不变。Duke-SCT is a subset of the DukeMTMC-reID dataset, and Market-SCT is a subset of the Market-1501 dataset. When obtaining Duke-SCT and Market-SCT, we processed the training data of the original datasets (DukeMTMC-reID and Market-1501) as follows: for each pedestrian, only the images under one randomly selected camera were retained (different pedestrians may select different cameras), thus forming the new datasets Duke-SCT and Market-SCT. Meanwhile, the test set remained unchanged.
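The SCT construction described here can be sketched as follows; the record layout and the fixed random seed are illustrative:

```python
import random

def make_sct(records, seed=0):
    """records: (person_id, camera_id, image) tuples from the full training
    set. Keep, for each person, only the images from one randomly chosen
    camera, as described for Duke-SCT and Market-SCT."""
    rng = random.Random(seed)
    cameras = {}
    for pid, cam, _ in records:
        cameras.setdefault(pid, set()).add(cam)
    chosen = {pid: rng.choice(sorted(cams)) for pid, cams in cameras.items()}
    return [rec for rec in records if rec[1] == chosen[rec[0]]]
```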

现有方案1:深层次的判别特征学习方法(a discriminative feature learning approach for deep face recognition),该方案是2016年在欧洲计算机视觉国际会议(european conference on computer vision,ECCV)发表的;Existing solution 1: a discriminative feature learning approach for deep face recognition, which was published at the European Conference on Computer Vision (ECCV) in 2016;

现有方案2:深层超球面嵌入人脸识别(deep hypersphere embedding for face recognition),该方案是2017年在国际计算机视觉与模式识别会议(conference on computer vision and pattern recognition,CVPR)发表的;Existing solution 2: deep hypersphere embedding for face recognition, which was published at the Conference on Computer Vision and Pattern Recognition (CVPR) in 2017;

现有方案3:深层人脸识别的附加角度边缘损失(additive angular margin loss for deep face recognition),该方案是2019年在CVPR发表的;Existing solution 3: additive angular margin loss for deep face recognition, which was published at CVPR in 2019;

现有方案4:精炼部分池的人员检索(person retrieval with refined part pooling),该方案是2018年在ECCV发表的;Existing solution 4: person retrieval with refined part pooling, which was published at ECCV in 2018;

现有方案5:用于人员重新识别的部分对齐双线性表示(part-aligned bilinear representations for person re-identification),该方案是2018年在ECCV发表的;Existing solution 5: part-aligned bilinear representations for person re-identification, which was published at ECCV in 2018;

现有方案6:学习具有多重粒度的判别特征进行人员重新识别(learning discriminative features with multiple granularities for person re-identification),该方案是2018年在美国计算机协会多媒体国际会议(association for computing machinery international conference on multimedia,ACMMM)发表的。Existing solution 6: learning discriminative features with multiple granularities for person re-identification, which was published at the Association for Computing Machinery International Conference on Multimedia (ACMMM) in 2018.

由表1可知,本申请方案无论在数据集1还是数据集2上,Rank-1和mAP均优于现有方案,具有较好的识别效果。As can be seen from Table 1, the Rank-1 and mAP of the solution of the present application are superior to those of the existing solutions on both dataset 1 and dataset 2, showing a better recognition effect.

图10是本申请实施例的行人再识别网络的训练装置的示意性框图。图10所示的行人再识别网络的训练装置8000包括获取单元8001和训练单元8002。Fig. 10 is a schematic block diagram of a training device for a person re-identification network according to an embodiment of the present application. The training device 8000 for a person re-identification network shown in Fig. 10 comprises an acquisition unit 8001 and a training unit 8002.

获取单元8001和训练单元8002可以用于执行本申请实施例的行人再识别网络的训练方法。The acquisition unit 8001 and the training unit 8002 may be used to execute the training method of the person re-identification network of the embodiment of the present application.

具体地,获取单元8001可以执行上述步骤1001和1002,训练单元8002可以执行上述步骤1003至1008。Specifically, the acquisition unit 8001 may execute the above steps 1001 and 1002, and the training unit 8002 may execute the above steps 1003 to 1008.

上述图10所示的装置8000中的获取单元8001可以相当于图11所示的装置9000中的通信接口9003,通过该通信接口9003可以获得相应的训练图像,或者,上述获取单元8001也可以相当于处理器9002,此时可以通过处理器9002从存储器9001中获取训练图像,或者通过通信接口9003从外部获取训练图像。另外,装置8000中的训练单元8002可以相当于装置9000中的处理器9002。The acquisition unit 8001 in the device 8000 shown in FIG. 10 may be equivalent to the communication interface 9003 in the device 9000 shown in FIG. 11, and the corresponding training image may be obtained through the communication interface 9003. Alternatively, the acquisition unit 8001 may be equivalent to the processor 9002, in which case the training image may be obtained from the memory 9001 through the processor 9002, or obtained from the outside through the communication interface 9003. In addition, the training unit 8002 in the device 8000 may be equivalent to the processor 9002 in the device 9000.

图11是本申请实施例的行人再识别网络的训练装置的硬件结构示意图。图11所示的行人再识别网络的训练装置9000(该装置9000具体可以是一种计算机设备)包括存储器9001、处理器9002、通信接口9003以及总线9004。其中,存储器9001、处理器9002、通信接口9003通过总线9004实现彼此之间的通信连接。FIG11 is a schematic diagram of the hardware structure of a training device for a person re-identification network according to an embodiment of the present application. The training device 9000 for a person re-identification network shown in FIG11 (the device 9000 may be a computer device) includes a memory 9001, a processor 9002, a communication interface 9003, and a bus 9004. The memory 9001, the processor 9002, and the communication interface 9003 are connected to each other through the bus 9004.

存储器9001可以是只读存储器(read only memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(random access memory,RAM)。存储器9001可以存储程序,当存储器9001中存储的程序被处理器9002执行时,处理器9002用于执行本申请实施例的行人再识别网络的训练方法的各个步骤。The memory 9001 may be a read-only memory (ROM), a static storage device, a dynamic storage device or a random access memory (RAM). The memory 9001 may store a program. When the program stored in the memory 9001 is executed by the processor 9002, the processor 9002 is used to execute each step of the training method of the pedestrian re-identification network of the embodiment of the present application.

处理器9002可以采用通用的中央处理器(central processing unit,CPU),微处理器,应用专用集成电路(application specific integrated circuit,ASIC),图形处理器(graphics processing unit,GPU)或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的行人再识别网络的训练方法。Processor 9002 can adopt a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), a graphics processing unit (GPU) or one or more integrated circuits to execute relevant programs to implement the training method of the pedestrian re-identification network of the embodiment of the present application.

处理器9002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请的行人再识别网络的训练方法的各个步骤可以通过处理器9002中的硬件的集成逻辑电路或者软件形式的指令完成。The processor 9002 may also be an integrated circuit chip with signal processing capability. In the implementation process, each step of the training method of the pedestrian re-identification network of the present application may be completed by hardware integrated logic circuits in the processor 9002 or software instructions.

上述处理器9002还可以是通用处理器、数字信号处理器(digital signalprocessing,DSP)、专用集成电路(ASIC)、现成可编程门阵列(field programmable gatearray,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器9001,处理器9002读取存储器9001中的信息,结合其硬件完成本行人再识别网络的训练装置中包括的单元所需执行的功能,或者执行本申请实施例的行人再识别网络的训练方法。The processor 9002 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, or discrete hardware components. The methods, steps, and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed. The general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or may be executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable memory, a register, etc. The storage medium is located in the memory 9001, and the processor 9002 reads the information in the memory 9001, and completes the functions required to be performed by the units included in the training device of the pedestrian re-identification network in combination with its hardware, or executes the training method of the pedestrian re-identification network of the embodiment of the present application.

通信接口9003使用例如但不限于收发器一类的收发装置,来实现装置9000与其他设备或通信网络之间的通信。例如,可以通过通信接口9003获取待识别图像。The communication interface 9003 uses a transceiver device such as, but not limited to, a transceiver to implement communication between the device 9000 and other devices or a communication network. For example, the image to be recognized can be obtained through the communication interface 9003.

总线9004可包括在装置9000各个部件(例如,存储器9001、处理器9002、通信接口9003)之间传送信息的通路。The bus 9004 may include a path for transmitting information between various components of the device 9000 (eg, the memory 9001 , the processor 9002 , and the communication interface 9003 ).

图12是本申请实施例的行人再识别装置的示意性框图。图12所示的行人再识别装置10000包括获取单元10001和识别单元10002。FIG12 is a schematic block diagram of a pedestrian re-identification device according to an embodiment of the present application. The pedestrian re-identification device 10000 shown in FIG12 includes an acquisition unit 10001 and a recognition unit 10002 .

获取单元10001和识别单元10002可以用于执行本申请实施例的行人再识别方法。The acquisition unit 10001 and the recognition unit 10002 can be used to execute the pedestrian re-recognition method of the embodiment of the present application.

具体地,获取单元10001可以执行上述步骤6001,识别单元10002可以执行上述步骤6002。Specifically, the acquisition unit 10001 may execute the above step 6001, and the identification unit 10002 may execute the above step 6002.

上述图12所示的装置10000中的获取单元10001可以相当于图13所示的装置11000中的通信接口11003,通过该通信接口11003可以获得待识别图像,或者,上述获取单元10001也可以相当于处理器11002,此时可以通过处理器11002从存储器11001中获取待识别图像,或者通过通信接口11003从外部获取待识别图像。The acquisition unit 10001 in the device 10000 shown in FIG. 12 may be equivalent to the communication interface 11003 in the device 11000 shown in FIG. 13, and the image to be identified may be obtained through the communication interface 11003. Alternatively, the acquisition unit 10001 may be equivalent to the processor 11002, in which case the image to be identified may be obtained from the memory 11001 through the processor 11002, or obtained from the outside through the communication interface 11003.

另外,上述图12所示的装置10000中的识别单元10002相当于图13所示的装置11000中处理器11002。In addition, the identification unit 10002 in the device 10000 shown in FIG. 12 is equivalent to the processor 11002 in the device 11000 shown in FIG. 13 .

图13是本申请实施例的行人再识别装置的硬件结构示意图。与上述装置10000类似,图13所示的行人再识别装置11000包括存储器11001、处理器11002、通信接口11003以及总线11004。其中,存储器11001、处理器11002、通信接口11003通过总线11004实现彼此之间的通信连接。FIG13 is a schematic diagram of the hardware structure of the pedestrian re-identification device of the embodiment of the present application. Similar to the above-mentioned device 10000, the pedestrian re-identification device 11000 shown in FIG13 includes a memory 11001, a processor 11002, a communication interface 11003 and a bus 11004. Among them, the memory 11001, the processor 11002, and the communication interface 11003 are connected to each other through the bus 11004.

存储器11001可以是ROM,静态存储设备和RAM。存储器11001可以存储程序,当存储器11001中存储的程序被处理器11002执行时,处理器11002和通信接口11003用于执行本申请实施例的行人再识别方法的各个步骤。The memory 11001 may be a ROM, a static storage device, or a RAM. The memory 11001 may store a program. When the program stored in the memory 11001 is executed by the processor 11002, the processor 11002 and the communication interface 11003 are used to execute the various steps of the pedestrian re-identification method of the embodiment of the present application.

处理器11002可以采用通用的,CPU,微处理器,ASIC,GPU或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的行人再识别装置中的单元所需执行的功能,或者执行本申请实施例的行人再识别方法。Processor 11002 can be a general-purpose CPU, microprocessor, ASIC, GPU or one or more integrated circuits, which are used to execute relevant programs to implement the functions required to be performed by the units in the pedestrian re-identification device of the embodiment of the present application, or to execute the pedestrian re-identification method of the embodiment of the present application.

处理器11002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请实施例的行人再识别方法的各个步骤可以通过处理器11002中的硬件的集成逻辑电路或者软件形式的指令完成。The processor 11002 may also be an integrated circuit chip with signal processing capability. In the implementation process, each step of the pedestrian re-identification method of the embodiment of the present application may be completed by an integrated logic circuit of hardware in the processor 11002 or by instructions in the form of software.

上述处理器11002还可以是通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器11001,处理器11002读取存储器11001中的信息,结合其硬件完成本申请实施例的行人再识别装置中包括的单元所需执行的功能,或者执行本申请实施例的行人再识别方法。The processor 11002 may also be a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, discrete hardware component. The methods, steps and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed. The general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or may be executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a mature storage medium in the art such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc. The storage medium is located in the memory 11001, and the processor 11002 reads the information in the memory 11001, and combines its hardware to complete the functions required to be performed by the units included in the pedestrian re-identification device of the embodiment of the present application, or executes the pedestrian re-identification method of the embodiment of the present application.

通信接口11003使用例如但不限于收发器一类的收发装置,来实现装置11000与其他设备或通信网络之间的通信。例如,可以通过通信接口11003获取待识别图像。The communication interface 11003 uses a transceiver device such as, but not limited to, a transceiver to implement communication between the device 11000 and other devices or a communication network. For example, the image to be recognized can be obtained through the communication interface 11003.

总线11004可包括在装置11000各个部件(例如,存储器11001、处理器11002、通信接口11003)之间传送信息的通路。The bus 11004 may include a path for transmitting information between various components of the device 11000 (eg, the memory 11001 , the processor 11002 , and the communication interface 11003 ).

应注意,尽管上述装置9000和装置11000仅仅示出了存储器、处理器、通信接口,但是在具体实现过程中,本领域的技术人员应当理解,装置9000和装置11000还可以包括实现正常运行所必须的其他器件。同时,根据具体需要,本领域的技术人员应当理解,装置9000和装置11000还可包括实现其他附加功能的硬件器件。此外,本领域的技术人员应当理解,装置9000和装置11000也可仅仅包括实现本申请实施例所必须的器件,而不必包括图11和图13中所示的全部器件。It should be noted that although the above-mentioned apparatus 9000 and apparatus 11000 only show a memory, a processor, and a communication interface, in the specific implementation process, those skilled in the art should understand that the apparatus 9000 and apparatus 11000 may also include other devices necessary for normal operation. At the same time, according to specific needs, those skilled in the art should understand that the apparatus 9000 and apparatus 11000 may also include hardware devices for implementing other additional functions. In addition, those skilled in the art should understand that the apparatus 9000 and apparatus 11000 may also only include the devices necessary for implementing the embodiments of the present application, and do not necessarily include all the devices shown in Figures 11 and 13.

本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those of ordinary skill in the art will appreciate that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.

所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working processes of the systems, devices and units described above can refer to the corresponding processes in the aforementioned method embodiments and will not be repeated here.

在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods can be implemented in other ways. For example, the device embodiments described above are only schematic. For example, the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.

所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for training a person re-identification network, comprising:

Step 1: obtaining M training images and annotation data of the M training images, wherein each of the M training images includes a pedestrian, and the annotation data of each training image includes a bounding box enclosing the pedestrian in the training image and pedestrian identification information, wherein different pedestrians correspond to different pedestrian identification information, training images with the same pedestrian identification information among the M training images come from the same image capture device, and M is an integer greater than 1;

Step 2: initializing network parameters of the person re-identification network to obtain initial values of the network parameters of the person re-identification network;

Step 3: inputting a batch of training images from the M training images into the person re-identification network for feature extraction to obtain a feature vector of each training image in the batch of training images;

wherein the batch of training images includes N anchor images, the N anchor images are any N training images in the batch, each of the N anchor images corresponds to one hardest positive sample image, one first hardest negative sample image, and one second hardest negative sample image, and N is a positive integer;

the hardest positive sample image corresponding to each anchor image is the training image in the batch that has the same pedestrian identification information as the anchor image and whose feature vector is farthest from the feature vector of the anchor image; the first hardest negative sample image corresponding to each anchor image is the training image in the batch that comes from the same image capture device as the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image; the second hardest negative sample image corresponding to each anchor image is the training image in the batch that comes from a different image capture device than the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image;

Step 4: determining a value of a loss function according to the feature vectors of the batch of training images, wherein the value of the loss function is obtained by averaging the values of N first loss functions;

wherein the value of each of the N first loss functions is computed from a first difference and a second difference corresponding to each of the N anchor images; the first difference corresponding to each anchor image is the difference between the hardest positive sample distance corresponding to the anchor image and the second hardest negative sample distance corresponding to the anchor image; the second difference corresponding to each anchor image is the difference between the second hardest negative sample distance corresponding to the anchor image and the first hardest negative sample distance corresponding to the anchor image; the hardest positive sample distance corresponding to each anchor image is the distance between the feature vector of the corresponding hardest positive sample image and the feature vector of the anchor image; the second hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the corresponding second hardest negative sample image and the feature vector of the anchor image; and the first hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the corresponding first hardest negative sample image and the feature vector of the anchor image;

Step 5: updating the network parameters of the person re-identification network according to the value of the loss function;

repeating Steps 3 to 5 until the person re-identification network meets a preset requirement.

2. The training method according to claim 1, wherein the person re-identification network meets the preset requirement when at least one of the following conditions is satisfied:

the number of training iterations of the person re-identification network is greater than or equal to a preset number;

the value of the loss function is less than or equal to a preset threshold;

the recognition performance of the person re-identification network reaches a preset requirement.

3. The training method according to claim 2, wherein the value of the loss function being less than or equal to a preset threshold comprises: the first difference being smaller than a first preset threshold, and the second difference being smaller than a second preset threshold.

4. The training method according to any one of claims 1-3, wherein the M training images are training images from multiple image capture devices, and the annotation data of training images from different image capture devices are labeled separately.

5. A person re-identification method, comprising:

obtaining an image to be recognized;

processing the image to be recognized with a person re-identification network to obtain a feature vector of the image to be recognized, wherein the person re-identification network is trained by the training method according to any one of claims 1-4;

comparing the feature vector of the image to be recognized with feature vectors of existing pedestrian images to obtain a recognition result for the image to be recognized.

6. An apparatus for training a person re-identification network, comprising:

an acquisition unit configured to perform Step 1;

Step 1: obtaining M training images and annotation data of the M training images, wherein each of the M training images includes a pedestrian, and the annotation data of each training image includes a bounding box enclosing the pedestrian in the training image and pedestrian identification information, wherein different pedestrians correspond to different pedestrian identification information, training images with the same pedestrian identification information among the M training images come from the same image capture device, and M is an integer greater than 1;

a training unit configured to perform Step 2;

Step 2: initializing network parameters of the person re-identification network to obtain initial values of the network parameters of the person re-identification network;

the training unit being further configured to repeat Steps 3 to 5 until the person re-identification network meets a preset requirement;

Step 3: inputting a batch of training images from the M training images into the person re-identification network for feature extraction to obtain a feature vector of each training image in the batch of training images;

wherein the batch of training images includes N anchor images, the N anchor images are any N training images in the batch, each of the N anchor images corresponds to one hardest positive sample image, one first hardest negative sample image, and one second hardest negative sample image, and N is a positive integer;

the hardest positive sample image corresponding to each anchor image is the training image in the batch that has the same pedestrian identification information as the anchor image and whose feature vector is farthest from the feature vector of the anchor image; the first hardest negative sample image corresponding to each anchor image is the training image in the batch that comes from the same image capture device as the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image; the second hardest negative sample image corresponding to each anchor image is the training image in the batch that comes from a different image capture device than the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image;

Step 4: determining a value of a loss function according to the feature vectors of the batch of training images, wherein the value of the loss function is obtained by averaging the values of N first loss functions;

wherein the value of each of the N first loss functions is computed from a first difference and a second difference corresponding to each of the N anchor images; the first difference corresponding to each anchor image is the difference between the hardest positive sample distance corresponding to the anchor image and the second hardest negative sample distance corresponding to the anchor image; the second difference corresponding to each anchor image is the difference between the second hardest negative sample distance corresponding to the anchor image and the first hardest negative sample distance corresponding to the anchor image; the hardest positive sample distance corresponding to each anchor image is the distance between the feature vector of the corresponding hardest positive sample image and the feature vector of the anchor image; the second hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the corresponding second hardest negative sample image and the feature vector of the anchor image; and the first hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the corresponding first hardest negative sample image and the feature vector of the anchor image;

Step 5: updating the network parameters of the person re-identification network according to the value of the loss function.

7. The training apparatus according to claim 6, wherein the person re-identification network meets the preset requirement when at least one of the following conditions is satisfied:

the number of training iterations of the person re-identification network is greater than or equal to a preset number;

the value of the loss function is less than or equal to a preset threshold;

the recognition performance of the person re-identification network reaches a preset requirement.

8. The training apparatus according to claim 7, wherein the value of the loss function being less than or equal to a preset threshold comprises: the first difference being smaller than a first preset threshold, and the second difference being smaller than a second preset threshold.

9. The training apparatus according to any one of claims 6-8, wherein the M training images are training images from multiple image capture devices, and the annotation data of training images from different image capture devices are labeled separately.

10. A person re-identification apparatus, comprising:

an acquisition unit configured to obtain an image to be recognized;

a recognition unit configured to process the image to be recognized with a person re-identification network to obtain a feature vector of the image to be recognized, wherein the person re-identification network is trained by the training method according to any one of claims 1-4;

the recognition unit being further configured to compare the feature vector of the image to be recognized with feature vectors of existing pedestrian images to obtain a recognition result for the image to be recognized.

11. A computer-readable storage medium, wherein the computer-readable medium stores program code for execution by a device, the program code comprising instructions for performing the training method according to any one of claims 1-4.

12. A computer-readable storage medium, wherein the computer-readable medium stores program code for execution by a device, the program code comprising instructions for performing the person re-identification method according to claim 5.

13. A chip, comprising a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the training method according to any one of claims 1-4.

14. A chip, comprising a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the person re-identification method according to claim 5.
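Outside the claim language, the mining and loss computation of Steps 3-4 can be sketched as follows. This is a hedged illustration only, not the patent's reference implementation: the hinge form and the `margin1`/`margin2` hyperparameters are assumptions (the claims fix only the two differences and that each must fall below a preset threshold, as in claim 3), and Euclidean distance is one common choice for the feature-vector distance.

```python
import numpy as np

def camera_aware_batch_hard_loss(features, pids, cam_ids, margin1=0.3, margin2=0.3):
    """Per-batch loss sketch for Step 4 of claim 1.

    For each anchor, mine the hardest positive (same identity, farthest
    feature), the first hardest negative (different identity, SAME camera,
    closest feature), and the second hardest negative (different identity,
    DIFFERENT camera, closest feature), then penalize the two differences
    named in the claim with hinge terms. margin1/margin2 are assumed
    hyperparameters, not values taken from the patent.
    """
    n = features.shape[0]
    # Pairwise Euclidean distances between feature vectors.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    idx = np.arange(n)
    per_anchor = []
    for a in range(n):
        pos = (pids == pids[a]) & (idx != a)
        neg_same_cam = (pids != pids[a]) & (cam_ids == cam_ids[a])
        neg_cross_cam = (pids != pids[a]) & (cam_ids != cam_ids[a])
        if not (pos.any() and neg_same_cam.any() and neg_cross_cam.any()):
            continue  # anchor lacks one of the required sample types
        d_ap = dists[a][pos].max()             # hardest positive distance
        d_an1 = dists[a][neg_same_cam].min()   # first hardest negative (same camera)
        d_an2 = dists[a][neg_cross_cam].min()  # second hardest negative (cross camera)
        # First difference: d_ap - d_an2; second difference: d_an2 - d_an1.
        per_anchor.append(max(0.0, d_ap - d_an2 + margin1)
                          + max(0.0, d_an2 - d_an1 + margin2))
    # Step 4 averages the per-anchor loss values over the batch.
    return float(np.mean(per_anchor)) if per_anchor else 0.0
```

Minimizing both hinge terms drives the two claimed differences below their thresholds, i.e., it orders the distances as described in the claims; because images sharing a pedestrian identity come from a single capture device here, the split into same-camera and cross-camera hardest negatives is what distinguishes this loss from a plain batch-hard triplet loss.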
CN201910839017.9A 2019-09-05 2019-09-05 Person re-identification network training method, person re-identification method and device Active CN112446270B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910839017.9A CN112446270B (en) 2019-09-05 2019-09-05 Person re-identification network training method, person re-identification method and device
PCT/CN2020/113041 WO2021043168A1 (en) 2019-09-05 2020-09-02 Person re-identification network training method and person re-identification method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910839017.9A CN112446270B (en) 2019-09-05 2019-09-05 Person re-identification network training method, person re-identification method and device

Publications (2)

Publication Number Publication Date
CN112446270A CN112446270A (en) 2021-03-05
CN112446270B true CN112446270B (en) 2024-05-14

Family

ID=74733092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910839017.9A Active CN112446270B (en) 2019-09-05 2019-09-05 Person re-identification network training method, person re-identification method and device

Country Status (2)

Country Link
CN (1) CN112446270B (en)
WO (1) WO2021043168A1 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949534B (en) * 2021-03-15 2024-09-27 鹏城实验室 Pedestrian re-identification method, intelligent terminal and computer readable storage medium
CN113095174B (en) * 2021-03-29 2024-07-23 深圳力维智联技术有限公司 Re-identification model training method, device, equipment and readable storage medium
CN113096080B (en) * 2021-03-30 2024-01-16 四川大学华西第二医院 Image analysis method and system
CN114943909B (en) * 2021-03-31 2023-04-18 华为技术有限公司 Method, device, equipment and system for identifying motion area
CN112861825B (en) * 2021-04-07 2023-07-04 北京百度网讯科技有限公司 Model training method, pedestrian re-recognition method, device and electronic equipment
CN113177469B (en) * 2021-04-27 2024-04-12 北京百度网讯科技有限公司 Training method, device, electronic equipment and medium for human attribute detection model
CN113536891B (en) * 2021-05-10 2023-01-03 新疆爱华盈通信息技术有限公司 Pedestrian traffic statistical method, storage medium and electronic equipment
CN113449601B (en) * 2021-05-28 2023-05-16 国家计算机网络与信息安全管理中心 Pedestrian re-recognition model training and recognition method and device based on progressive smooth loss
CN113449966B (en) * 2021-06-03 2023-04-07 湖北北新建材有限公司 Gypsum board equipment inspection method and system
CN113591545B (en) * 2021-06-11 2024-05-24 北京师范大学珠海校区 Deep learning-based multi-level feature extraction network pedestrian re-identification method
CN113255604B (en) 2021-06-29 2021-10-15 苏州浪潮智能科技有限公司 Pedestrian re-identification method, device, equipment and medium based on deep learning network
CN113408492B (en) * 2021-07-23 2022-06-14 四川大学 A pedestrian re-identification method based on global-local feature dynamic alignment
CN114298961A (en) * 2021-08-04 2022-04-08 腾讯科技(深圳)有限公司 Image processing method, device, device and storage medium
CN113762153B (en) * 2021-09-07 2024-04-02 北京工商大学 Novel tailing pond detection method and system based on remote sensing data
CN114494930B (en) * 2021-09-09 2023-09-22 马上消费金融股份有限公司 Training method and device for voice and image synchronism measurement model
CN114240997B (en) * 2021-11-16 2023-07-28 南京云牛智能科技有限公司 Intelligent building online trans-camera multi-target tracking method
CN114359665B (en) * 2021-12-27 2024-03-26 北京奕斯伟计算技术股份有限公司 Full-task face recognition model training method and device, face recognition method
CN114863488B (en) * 2022-06-08 2024-08-13 电子科技大学成都学院 Pedestrian re-identification-based public place polymorphic pedestrian target identification tracking method, electronic equipment and storage medium
CN115147871B (en) * 2022-07-19 2024-06-11 北京龙智数科科技服务有限公司 Pedestrian re-identification method in shielding environment
CN115546583B (en) * 2022-10-10 2025-06-24 广州大学 Data augmentation and training method and training device for person re-identification network model
CN115952731B (en) * 2022-12-20 2024-01-16 哈尔滨工业大学 Active vibration control method, device and equipment for wind turbine blade
CN115641559B (en) * 2022-12-23 2023-06-02 深圳佑驾创新科技有限公司 Target matching method, device and storage medium for looking-around camera group
CN116824695B (en) * 2023-06-07 2024-07-19 南通大学 Pedestrian re-identification non-local defense method based on feature denoising

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108108754A (en) * 2017-12-15 2018-06-01 北京迈格威科技有限公司 The training of identification network, again recognition methods, device and system again
CN109784166A (en) * 2018-12-13 2019-05-21 北京飞搜科技有限公司 The method and device that pedestrian identifies again
CN109800794A (en) * 2018-12-27 2019-05-24 上海交通大学 A kind of appearance similar purpose identifies fusion method and system across camera again
CN109977798A (en) * 2019-03-06 2019-07-05 中山大学 The exposure mask pond model training identified again for pedestrian and pedestrian's recognition methods again
CN110046579A (en) * 2019-04-18 2019-07-23 重庆大学 A kind of pedestrian's recognition methods again of depth Hash

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10395385B2 (en) * 2017-06-27 2019-08-27 Qualcomm Incorporated Using object re-identification in video surveillance
CN109344787B (en) * 2018-10-15 2021-06-08 浙江工业大学 A specific target tracking method based on face recognition and pedestrian re-identification

Non-Patent Citations (4)

Title
Weihua Chen et al., "Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-identification," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017-11-09 *
Lin Wu et al., "Deep adaptive feature embedding with local sample distributions for person re-identification," Pattern Recognition, vol. 73 (2018), 2017-08-31 *
Hu Xiao, "Research on Person Re-identification Based on Fusion of Attribute and Identity Features," China Master's Theses Full-text Database, Information Science and Technology, no. 08, 2019-08-15 *
Wang Jin, "Research on Metric-Learning-Based Person Re-identification," China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 10, 2018-10-15 *

Also Published As

Publication number Publication date
WO2021043168A1 (en) 2021-03-11
CN112446270A (en) 2021-03-05

Similar Documents

Publication Publication Date Title
CN112446270B (en) Person re-identification network training method, person re-identification method and device
US12314343B2 (en) Image classification method, neural network training method, and apparatus
CN112446398B (en) Image classification method and device
CN110188795B (en) Image classification method, data processing method and device
CN111914997B (en) Methods for training neural networks, image processing methods and devices
CN111291809B (en) A processing device, method and storage medium
US12131521B2 (en) Image classification method and apparatus
CN112668366B (en) Image recognition method, device, computer readable storage medium and chip
US12039440B2 (en) Image classification method and apparatus, and image classification model training method and apparatus
CN111797882B (en) Image classification method and device
CN113011562B (en) Model training method and device
CN112529904B (en) Image semantic segmentation method, device, computer readable storage medium and chip
CN113807183B (en) Model training methods and related equipment
CN111667399A (en) Method for training style migration model, method and device for video style migration
CN113065645B (en) Twin attention network, image processing method and device
CN112446835B (en) Image restoration method, image restoration network training method, device and storage medium
CN113191489B (en) Training method of binary neural network model, image processing method and device
CN110222718B (en) Image processing method and device
WO2022179606A1 (en) Image processing method and related apparatus
CN112464930A (en) Target detection network construction method, target detection method, device and storage medium
CN114693986A (en) Training method of active learning model, image processing method and device

Legal Events

Date Code Title Description

PB01: Publication

SE01: Entry into force of request for substantive examination

TA01: Transfer of patent application right
Effective date of registration: 20220214
Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province
Applicant after: Huawei Cloud Computing Technologies Co.,Ltd.
Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen
Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01: Patent grant