CN113837024B - A cross-border tracking method based on multimodality - Google Patents
A cross-border tracking method based on multimodality
- Publication number
- CN113837024B (application CN202111025677.7A)
- Authority
- CN
- China
- Prior art keywords
- similarity
- mode
- image
- target
- infrared
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a multimodality-based cross-border tracking method. The difference between the two modalities is preserved; query source pictures of the same target in both modalities are then used to search the gallery pedestrian library, and a similarity-weighted inference measurement is applied to the search results of the two modalities to obtain the final recognition result. This effectively addresses the poor performance, low accuracy, and low computational efficiency of cross-modality pedestrian re-identification between visible-light and infrared images.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to a multimodality-based cross-border tracking method.
Background
Conventional RGB-RGB single-modality pedestrian re-identification can only handle recognition under sufficient illumination, whereas at night or in dimly lit places visible-light cameras are of little use, and criminals usually move at night. Most current surveillance cameras are equipped with both infrared and visible-light capability: clear RGB images can be acquired when light is sufficient, and the infrared function can be enabled to acquire infrared images when light is insufficient. This provides favorable conditions for research on cross-modality pedestrian re-identification.
Pedestrian re-identification is a popular research topic in computer vision. It mainly addresses the recognition and retrieval of pedestrians across cameras and scenes and, as a complement to face recognition, is widely applied in security and intelligent surveillance for continuously tracking pedestrians whose faces cannot be captured clearly across cameras. Its difficulties include large intra-class variation (the appearance of the same person may differ greatly) and small inter-class variation (the appearance of different people may be very similar), mainly caused by factors such as camera viewpoint, illumination differences, pedestrian pose changes, and occlusion. Night scenes are likewise important in surveillance and security.
Most existing cross-modality pedestrian re-identification methods based on visible-light and infrared images aim at eliminating the difference between the two modalities, along two main lines: 1) a convolutional neural network with shared parameters is used to learn features shared between the two modalities; 2) a generative adversarial network is used to learn the correlation between modalities by training a generator and a discriminator. However, most prior art innovates on network structure without considering the concrete problems and challenges of cross-modality pedestrian re-identification in practical application scenarios; it increases network complexity and training time, and good recognition accuracy is hard to obtain in practice.
A multimodality-based cross-border tracking method is therefore needed to address the above problems.
Disclosure of Invention
The invention provides a multimodality-based cross-border tracking method. In the prior art, cross-modality pedestrian re-identification techniques aim at eliminating the difference between the two modalities, which leads to complex network structures, low computational efficiency, and poor recognition performance. By allowing the difference between the visible-light and infrared modalities to be preserved, the invention addresses the low accuracy, poor effect, and low computational efficiency of cross-modality pedestrian re-identification in practical application scenarios.
The invention provides a multimodality-based cross-border tracking method, comprising the following steps:
S1, obtaining a visible-light image and an infrared image of the same target;
S2, detecting pedestrians in videos from different cameras using a YOLOv object detection algorithm, and cropping the detections to generate an image-set pedestrian library;
S3, constructing a pedestrian re-identification model;
S4, setting a similarity threshold a;
S5, extracting feature vectors from the visible-light image of the target to be queried, the infrared image of the target to be queried, and the image-set pedestrian library;
S6, measuring the similarity between the feature vectors of the visible-light and infrared images of the target to be queried and the feature vectors in the image-set pedestrian library;
S7, obtaining the comprehensive visible-light-modality similarity from the similarity of the visible-light results and the preset visible-light weight, and the comprehensive infrared-modality similarity from the infrared similarity and the preset infrared weight;
S8, performing IoU comparison on the results of the two modalities and sorting them in descending order of comprehensive similarity;
S9, removing duplicate pictures according to camera ID and picture name to obtain the final result.
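The steps above can be sketched end to end as a small retrieval pipeline. This is an illustration only, not the patent's implementation: feature extraction is stubbed out with random unit vectors in place of the ReID model, the weights and top-k values are arbitrary, and all function names (`extract_features`, `query_modality`) are hypothetical.

```python
import numpy as np

def extract_features(images, dim=2048, seed=0):
    """Stand-in for the ReID model of steps S3/S5: one L2-normalized
    feature vector per image (a real system would run a CNN here)."""
    rng = np.random.default_rng(seed)
    feats = rng.normal(size=(len(images), dim))
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def query_modality(q_feat, gallery_feats, top_k, threshold):
    """Step S6: cosine similarity of one query against the gallery,
    keeping at most top_k results whose similarity exceeds threshold."""
    sims = gallery_feats @ q_feat          # cosine similarity (unit vectors)
    order = np.argsort(-sims)[:top_k]      # best matches first
    return [(int(j), float(sims[j])) for j in order if sims[j] > threshold]

# Toy gallery of 5 cropped pedestrian images (step S2 would use a detector).
gallery = extract_features(range(5), seed=1)
q_rgb = extract_features(["rgb"], seed=2)[0]   # visible-light query (S1)
q_ir = extract_features(["ir"], seed=3)[0]     # infrared query (S1)

rgb_hits = query_modality(q_rgb, gallery, top_k=3, threshold=-1.0)
ir_hits = query_modality(q_ir, gallery, top_k=3, threshold=-1.0)

# Step S7: weight each modality's similarities; S8/S9: merge, dedup
# (dict keys), and sort in descending comprehensive similarity.
alpha, beta = 0.6, 0.4   # illustrative preset weights
merged = {}
for j, s in rgb_hits:
    merged[j] = merged.get(j, 0.0) + alpha * s
for j, s in ir_hits:
    merged[j] = merged.get(j, 0.0) + beta * s
final = sorted(merged.items(), key=lambda kv: -kv[1])
print(final[0])
```

The dict-based merge is one plausible fusion; the patent's exact combination rule is given only in terms of its weighted-similarity formulas.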
In a preferred embodiment of the multimodality-based cross-border tracking method, measuring in step S6 the similarity between the visible-light image of the target to be queried and the feature vectors in the image-set pedestrian library specifically comprises:
for the visible-light image of the target to be queried, using the image-set pedestrian library as the search space and measuring the similarity between its feature vector and the feature vectors in the library:
d_s(q_RGB, g_j) = d^(k)(q_RGB, g_j),
where d_s(q_RGB, g_j) denotes the k similarity distance measures between the target's visible-light-modality image and the image-set pedestrian library, q_RGB denotes the RGB-modality image of the target query, and g_j denotes the pedestrian gallery feature library;
the results exceeding the set threshold a are then returned:
ψ_RGB(q_RGB, k_q, d_s, th_RGB) = top_{k_q}{ g_j : d_s(q_RGB, g_j) > th_RGB },
where ψ_RGB(q_RGB, k_q, d_s, th_RGB) denotes the first top_k_q results that exceed the set threshold, q_RGB denotes the query picture in the target RGB modality, k_q denotes the first top_k_q results, d_s denotes the distance similarity, and th_RGB denotes the threshold set in the RGB modality.
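A minimal sketch of the thresholded top-k return ψ_RGB described above, assuming d_s is a similarity score (higher is better, e.g. cosine similarity); the patent does not fix the distance function, and `psi_topk` and its argument names are illustrative:

```python
import numpy as np

def psi_topk(d_s, k_q, th):
    """Return the indices of at most k_q gallery entries whose
    similarity d_s exceeds the threshold th, best first."""
    idx = np.argsort(-d_s)        # descending similarity
    idx = idx[d_s[idx] > th]      # keep only entries above the threshold
    return idx[:k_q]

d_s = np.array([0.91, 0.15, 0.78, 0.40, 0.88])   # toy d_s(q_RGB, g_j) values
print(psi_topk(d_s, k_q=3, th=0.5))              # -> [0 4 2]
```

The same helper serves the infrared query with th_IR in place of th_RGB.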
In a preferred embodiment of the multimodality-based cross-border tracking method, measuring in step S6 the similarity between the infrared image of the target to be queried and the feature vectors in the image-set pedestrian library specifically comprises:
for the infrared image of the target to be queried, using the image-set pedestrian library as the search space and measuring the similarity between its feature vector and the feature vectors in the library:
d_s(q_IR, g_j) = d^(k)(q_IR, g_j),
where d_s(q_IR, g_j) denotes the distance similarity between the target's infrared IR-modality image and the k-th pedestrian in the gallery pedestrian library, q_IR denotes the IR-modality image of the target query, and g_j denotes the pedestrian gallery feature library;
the results exceeding the set threshold a are then returned:
ψ_IR(q_IR, k_q, d_s, th_IR) = top_{k_q}{ g_j : d_s(q_IR, g_j) > th_IR },
where ψ_IR(q_IR, k_q, d_s, th_IR) denotes the first top_k_q results that exceed the set threshold, q_IR denotes the query picture in the target IR infrared modality, k_q denotes the first top_k_q results, d_s denotes the distance similarity, and th_IR denotes the threshold set in the IR infrared modality.
In a preferred embodiment of the multimodality-based cross-border tracking method, the comprehensive visible-light-modality similarity in step S7 is obtained from the similarity of the visible-light results and the preset visible-light weight: the similarities of the returned visible-light results are weighted by α, where α denotes the preset weight in the RGB modality and ψ_RGB(q_RGB, k_q, d_s, th_RGB) denotes the weighted comprehensive similarity in the RGB modality.
In a preferred embodiment of the multimodality-based cross-border tracking method, the comprehensive infrared-modality similarity in step S7 is obtained from the infrared similarity and the preset infrared weight: the similarities of the returned infrared results are weighted by β, where β denotes the preset weight in the IR infrared modality and ψ_IR(q_IR, k_q, d_s, th_IR) denotes the weighted comprehensive similarity in the IR modality.
In a preferred embodiment of the multimodality-based cross-border tracking method, the pedestrian re-identification model is used to extract discriminative features from the two modalities of visible-light and infrared images.
In a preferred embodiment of the multimodality-based cross-border tracking method, the results of the two modalities in step S8 undergo IoU comparison and are sorted in descending order of comprehensive similarity, where D_s denotes the distance similarities of the set of results returned after the results of the two modalities have undergone IoU (intersection-over-union) comparison and been sorted in descending order of comprehensive similarity.
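Step S8's IoU comparison presumably matches detections returned by the two modalities through bounding-box overlap. A standard intersection-over-union helper is sketched below, with boxes as (x1, y1, x2, y2) corners; the 0.5 matching threshold is an assumption, not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def same_detection(box_a, box_b, thresh=0.5):
    """Treat an RGB result and an IR result as the same pedestrian
    detection when their boxes overlap strongly enough."""
    return iou(box_a, box_b) > thresh

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # -> 0.3333333333333333
```

Matched pairs would then contribute a single entry to the merged list, which is sorted by D_s in descending order.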
The invention has the following beneficial effects:
(1) the two modalities, a visible-light image and an infrared image of the picture of the target to be queried, are obtained as input data sources, and the corresponding modality data can be obtained by modality conversion, improving the diversity and discriminability of the input data sources;
(2) the differences between pictures of different modalities in the gallery pedestrian library are allowed to be preserved;
(3) a ReID model with strong feature extraction and discrimination capability is designed for the visible-light and infrared modalities and used to extract feature vectors of the multimodal data;
(4) the two modalities of the query target are searched separately in the gallery pedestrian library, and results close in similarity to the respective query modality are returned, reducing the miss rate for the same suspect target;
(5) the results returned by the two modalities are merged into the final result via preset weights and comprehensive-similarity sorting, improving the recognition accuracy of cross-modality pedestrian re-identification;
(6) the accuracy of pedestrian re-identification is improved without increasing network complexity or additional computational cost;
(7) the network complexity caused by forcibly mapping different modalities into the same feature space is avoided, and searching separately with the different modalities of the query picture increases the diversity and discriminability of the features, improving cross-modality pedestrian re-identification performance;
(8) the network output is refined by the preset weights and the comprehensive-similarity sorting, effectively improving Top-n accuracy and recall.
Drawings
Fig. 1 is a schematic diagram of a cross-border tracking method based on multiple modes.
Detailed Description
The embodiments of the invention are described below clearly and completely with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the invention.
Example 1
As shown in fig. 1, a multimodality-based cross-border tracking method comprises the following steps:
S1, acquiring the two-modality data, a visible-light image and an infrared image, of the query under the same target ID; for example, the two modality images of a suspect target are acquired through the video summarization module of a cross-border tracking system by browsing the summary results; if an image in only one modality is available, the image of the target in the other modality is obtained by modality conversion;
S2, detecting pedestrians in videos from different cameras using a YOLOv object detection algorithm, and cropping the detections to generate an image-set pedestrian library;
S3, constructing a pedestrian re-identification (ReID) model and training it to obtain strong feature extraction and discrimination capability, then extracting the features of the two-modality query data and of the gallery pedestrian library, the gallery features forming an n×2048 matrix (one 2048-dimensional vector per gallery image);
S4, setting a similarity threshold a;
S5, extracting feature vectors from the visible-light image of the target to be queried, the infrared image of the target to be queried, and the image-set pedestrian library;
S6, measuring the similarity between the feature vectors of the visible-light and infrared images of the target to be queried and the feature vectors in the image-set pedestrian library;
S7, obtaining the comprehensive visible-light-modality similarity from the similarity of the visible-light results and the preset visible-light weight, and the comprehensive infrared-modality similarity from the infrared similarity and the preset infrared weight;
S8, performing IoU comparison on the results of the two modalities and sorting them in descending order of comprehensive similarity;
S9, since the same picture may be retrieved and returned by both the visible-light RGB modality and the infrared IR modality, removing duplicates according to camera ID and picture name to obtain the final result.
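Step S9's duplicate removal keyed on (camera ID, picture name) can be sketched as follows; the record field names are illustrative, and keeping the higher-similarity copy of a duplicate is an assumption about the intended behavior.

```python
def dedup_results(results):
    """Keep, for each (camera_id, picture_name) pair, only the entry
    with the highest comprehensive similarity (step S9), then return
    the final list in descending comprehensive similarity."""
    best = {}
    for r in results:
        key = (r["camera_id"], r["picture_name"])
        if key not in best or r["similarity"] > best[key]["similarity"]:
            best[key] = r
    return sorted(best.values(), key=lambda r: -r["similarity"])

results = [
    {"camera_id": 3, "picture_name": "p001.jpg", "similarity": 0.92},  # RGB hit
    {"camera_id": 3, "picture_name": "p001.jpg", "similarity": 0.87},  # IR hit, same picture
    {"camera_id": 5, "picture_name": "p044.jpg", "similarity": 0.90},
]
print(dedup_results(results))
```

The first two toy entries model the case described above, where both modalities retrieve the same gallery picture.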
In step S6, measuring the similarity between the visible-light image of the target to be queried and the feature vectors in the image-set pedestrian library specifically comprises:
for the visible-light image of the target to be queried, using the image-set pedestrian library as the search space and measuring the similarity between its feature vector and the feature vectors in the library:
d_s(q_RGB, g_j) = d^(k)(q_RGB, g_j),
where d_s(q_RGB, g_j) denotes the k similarity distance measures between the target's visible-light-modality image and the image-set pedestrian library, q_RGB denotes the RGB-modality image of the target query, and g_j denotes the pedestrian gallery feature library;
the results exceeding the set threshold a are then returned:
ψ_RGB(q_RGB, k_q, d_s, th_RGB) = top_{k_q}{ g_j : d_s(q_RGB, g_j) > th_RGB },
where ψ_RGB(q_RGB, k_q, d_s, th_RGB) denotes the first top_k_q results that exceed the set threshold, q_RGB denotes the query picture in the target RGB modality, k_q denotes the first top_k_q results, d_s denotes the distance similarity, and th_RGB denotes the threshold set in the RGB modality.
In step S6, measuring the similarity between the infrared image of the target to be queried and the feature vectors in the image-set pedestrian library specifically comprises:
for the infrared image of the target to be queried, using the image-set pedestrian library as the search space and measuring the similarity between its feature vector and the feature vectors in the library:
d_s(q_IR, g_j) = d^(k)(q_IR, g_j),
where d_s(q_IR, g_j) denotes the distance similarity between the target's infrared IR-modality image and the k-th pedestrian in the gallery pedestrian library, q_IR denotes the IR-modality image of the target query, and g_j denotes the pedestrian gallery feature library;
the results exceeding the set threshold a are then returned:
ψ_IR(q_IR, k_q, d_s, th_IR) = top_{k_q}{ g_j : d_s(q_IR, g_j) > th_IR },
where ψ_IR(q_IR, k_q, d_s, th_IR) denotes the first top_k_q results that exceed the set threshold, q_IR denotes the query picture in the target IR infrared modality, k_q denotes the first top_k_q results, d_s denotes the distance similarity, and th_IR denotes the threshold set in the IR infrared modality.
In step S7, the comprehensive visible-light-modality similarity is obtained from the similarity of the visible-light results and the preset visible-light weight: the similarities of the returned visible-light results are weighted by α, where α denotes the preset weight in the RGB modality and ψ_RGB(q_RGB, k_q, d_s, th_RGB) denotes the weighted comprehensive similarity in the RGB modality.
In step S7, the comprehensive infrared-modality similarity is obtained from the infrared similarity and the preset infrared weight: the similarities of the returned infrared results are weighted by β, where β denotes the preset weight in the IR infrared modality and ψ_IR(q_IR, k_q, d_s, th_IR) denotes the weighted comprehensive similarity in the IR modality.
The pedestrian re-identification model is used to extract discriminative features from the two modalities of visible-light and infrared images.
In step S8, the results of the two modalities undergo IoU comparison and are sorted in descending order of comprehensive similarity, where D_s denotes the distance similarities of the set of results returned after the results of the two modalities have undergone IoU (intersection-over-union) comparison and been sorted in descending order of comprehensive similarity.
The foregoing is only a preferred embodiment of the invention, and the scope of the invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art according to the technical scheme and inventive concept of the invention, within its scope, shall be covered by the protection scope of the invention.
Claims (7)
1. A multimodality-based cross-border tracking method, characterized by comprising the following steps:
S1, obtaining a visible-light image and an infrared image of the same target;
S2, detecting pedestrians in videos from different cameras using a YOLOv object detection algorithm, and cropping the detections to generate an image-set pedestrian library;
S3, constructing a pedestrian re-identification model;
S4, setting a similarity threshold a;
S5, extracting feature vectors from the visible-light image of the target to be queried, the infrared image of the target to be queried, and the image-set pedestrian library;
S6, measuring the similarity between the feature vectors of the visible-light and infrared images of the target to be queried and the feature vectors in the image-set pedestrian library;
S7, obtaining the comprehensive visible-light-modality similarity from the similarity of the visible-light results and the preset visible-light weight, and the comprehensive infrared-modality similarity from the infrared similarity and the preset infrared weight;
S8, performing IoU comparison on the results of the two modalities and sorting them in descending order of comprehensive similarity;
S9, removing duplicate pictures according to camera ID and picture name to obtain the final result.
2. The multimodality-based cross-border tracking method of claim 1, wherein measuring in step S6 the similarity between the visible-light image of the target to be queried and the feature vectors in the image-set pedestrian library specifically comprises:
for the visible-light image of the target to be queried, using the image-set pedestrian library as the search space and measuring the similarity between its feature vector and the feature vectors in the library:
d_s(q_RGB, g_j) = d^(k)(q_RGB, g_j),
where d_s(q_RGB, g_j) denotes the k similarity distance measures between the target's visible-light-modality image and the image-set pedestrian library, q_RGB denotes the RGB-modality image of the target query, and g_j denotes the pedestrian gallery feature library;
the results exceeding the set threshold a are then returned:
ψ_RGB(q_RGB, k_q, d_s, th_RGB) = top_{k_q}{ g_j : d_s(q_RGB, g_j) > th_RGB },
where ψ_RGB(q_RGB, k_q, d_s, th_RGB) denotes the first top_k_q results that exceed the set threshold, q_RGB denotes the query picture in the target RGB modality, k_q denotes the first top_k_q results, d_s denotes the distance similarity, and th_RGB denotes the threshold set in the RGB modality.
3. The multimodality-based cross-border tracking method of claim 1, wherein measuring in step S6 the similarity between the infrared image of the target to be queried and the feature vectors in the image-set pedestrian library specifically comprises:
for the infrared image of the target to be queried, using the image-set pedestrian library as the search space and measuring the similarity between its feature vector and the feature vectors in the library:
d_s(q_IR, g_j) = d^(k)(q_IR, g_j),
where d_s(q_IR, g_j) denotes the distance similarity between the target's infrared IR-modality image and the k-th pedestrian in the gallery pedestrian library, q_IR denotes the IR-modality image of the target query, and g_j denotes the pedestrian gallery feature library;
the results exceeding the set threshold a are then returned:
ψ_IR(q_IR, k_q, d_s, th_IR) = top_{k_q}{ g_j : d_s(q_IR, g_j) > th_IR },
where ψ_IR(q_IR, k_q, d_s, th_IR) denotes the first top_k_q results that exceed the set threshold, q_IR denotes the query picture in the target IR infrared modality, k_q denotes the first top_k_q results, d_s denotes the distance similarity, and th_IR denotes the threshold set in the IR infrared modality.
4. The multimodality-based cross-border tracking method of claim 2, wherein the comprehensive visible-light-modality similarity in step S7 is obtained from the similarity of the visible-light results and the preset visible-light weight: the similarities of the returned visible-light results are weighted by α, where α denotes the preset weight in the RGB modality and ψ_RGB(q_RGB, k_q, d_s, th_RGB) denotes the weighted comprehensive similarity in the RGB modality.
5. The multimodality-based cross-border tracking method of claim 4, wherein the comprehensive infrared-modality similarity in step S7 is obtained from the infrared similarity and the preset infrared weight: the similarities of the returned infrared results are weighted by β, where β denotes the preset weight in the IR infrared modality and ψ_IR(q_IR, k_q, d_s, th_IR) denotes the weighted comprehensive similarity in the IR modality.
6. The multimodality-based cross-border tracking method of claim 1, wherein the pedestrian re-identification model is used to extract discriminative features from the two modalities of visible-light and infrared images.
7. The multimodality-based cross-border tracking method of claim 5, wherein in step S8 the results of the two modalities undergo IoU comparison and are sorted in descending order of comprehensive similarity, D_s denoting the distance similarities of the set of results returned after the results of the two modalities have undergone IoU (intersection-over-union) comparison and been sorted in descending order of comprehensive similarity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111025677.7A CN113837024B (en) | 2021-09-02 | 2021-09-02 | A cross-border tracking method based on multimodality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113837024A CN113837024A (en) | 2021-12-24 |
CN113837024B true CN113837024B (en) | 2024-11-22 |
Family
ID=78962049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111025677.7A Active CN113837024B (en) | 2021-09-02 | 2021-09-02 | A cross-border tracking method based on multimodality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837024B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875588A (en) * | 2018-05-25 | 2018-11-23 | 武汉大学 | Across camera pedestrian detection tracking based on deep learning |
CN112257619A (en) * | 2020-10-27 | 2021-01-22 | 北京澎思科技有限公司 | Target re-identification method, device, equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8611591B2 (en) * | 2007-12-21 | 2013-12-17 | 21 Ct, Inc. | System and method for visually tracking with occlusions |
JP7132046B2 (en) * | 2018-09-13 | 2022-09-06 | 株式会社東芝 | SEARCH DEVICE, SEARCH METHOD AND PROGRAM |
CN112016401B (en) * | 2020-08-04 | 2024-05-17 | 杰创智能科技股份有限公司 | Cross-mode pedestrian re-identification method and device |
CN113034550B (en) * | 2021-05-28 | 2021-08-10 | 杭州宇泛智能科技有限公司 | Cross-mirror pedestrian trajectory tracking method, system, electronic device and storage medium |
CN113283362B (en) * | 2021-06-04 | 2024-03-22 | 中国矿业大学 | A cross-modal person re-identification method |
CN113313188B (en) * | 2021-06-10 | 2022-04-12 | 四川大学 | A cross-modal fusion target tracking method |
Also Published As
Publication number | Publication date |
---|---|
CN113837024A (en) | 2021-12-24 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| CP03 | Change of name, title or address |
Address after: Room 202, Building 12, No. 58 Dongbeiwang West Road, Haidian District, Beijing 100193
Patentee after: Beijing xinorange Smart Technology Development Co.,Ltd.
Country or region after: China
Address before: Room a316, floor 3, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085
Patentee before: Beijing xinorange Smart Technology Development Co.,Ltd.
Country or region before: China