
CN111815677A - Target tracking method, device, terminal device and readable storage medium - Google Patents


Info

Publication number
CN111815677A
Authority
CN
China
Prior art keywords
template
sample set
detection
candidate
preset size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010661194.5A
Other languages
Chinese (zh)
Other versions
CN111815677B (en)
Inventor
衣杨
赵小蕾
陈嘉谦
邱泽敏
刘东琳
陈怡华
李宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinhua College of Sun Yat Sen University
Original Assignee
Xinhua College of Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinhua College of Sun Yat Sen University
Priority to CN202010661194.5A
Publication of CN111815677A
Application granted
Publication of CN111815677B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present invention discloses a target tracking method, device, terminal device, and readable storage medium. The method includes: using a feature pyramid network to extract n template feature maps of different scales from a template frame; using the feature pyramid network to extract n detection feature maps of different scales from a detection frame; using the i-th region candidate sub-network to determine, on the detection feature map, a preset number of candidate images together with the score and position of each candidate image, according to the i-th template feature map and the detection feature map of the same scale as the i-th template feature map; and, after the n region candidate sub-networks have processed the n template feature maps and the corresponding detection feature maps, determining, according to the scores of the candidate images, the top m candidate images with the highest scores among all candidate images as the tracking target, the positions corresponding to the tracking target serving as the target positions. This scheme achieves accurate tracking of small targets.

Figure 202010661194

Description

Target tracking method, device, terminal device and readable storage medium

Technical Field

The present invention relates to the field of target tracking, and in particular to a target tracking method, device, terminal device, and readable storage medium.

Background Art

In recent years, deep learning has entered the field of target tracking and attracted the attention of more and more researchers. The low-level features of a tracking image have high resolution, which facilitates precise localization of the target, while the high-level features carry more semantic information, which helps handle large target variations, prevents tracker drift, and facilitates coarse localization of the target region. Deep learning can therefore extract better target features and represent the target better. However, because deep learning requires large training sets and long online update times, the timeliness of tracking remains a considerable challenge.

SUMMARY OF THE INVENTION

In view of the above problems, the present invention provides a target tracking method, device, terminal device, and readable storage medium.

A first embodiment of the present invention provides a target tracking method, which includes:

extracting n template feature maps of different scales from a template frame by using a feature pyramid network;

extracting, from a detection frame by using the feature pyramid network, n detection feature maps of different scales corresponding one-to-one to the scales of the n template feature maps;

determining, by using the i-th region candidate sub-network and according to the i-th template feature map and the detection feature map of the same scale as the i-th template feature map, a preset number of candidate images on the detection feature map, together with the score and position of each candidate image; and

after the n region candidate sub-networks have processed the n template feature maps and the corresponding detection feature maps, determining, according to the scores of the candidate images, the top m candidate images with the highest scores among all candidate images as the tracking target, the positions corresponding to the tracking target serving as the target positions.
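The final selection step above can be sketched as follows. The flat list of (score, position) pairs pooled from all sub-networks is an illustrative data layout, not something specified by the method itself.

```python
def select_tracking_targets(candidates, m):
    """Pick the top-m scoring candidate images pooled from all n
    region candidate sub-networks.

    candidates: list of (score, position) pairs, where position is an
    (x, y, w, h) box; this pair layout is assumed for illustration.
    Returns the m highest-scoring candidates and their positions.
    """
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    top_m = ranked[:m]
    target_positions = [pos for _, pos in top_m]
    return top_m, target_positions

# Pooled candidates from several sub-networks (scores are made up):
cands = [(0.91, (10, 12, 32, 32)), (0.42, (50, 60, 64, 64)),
         (0.87, (11, 13, 32, 32)), (0.15, (90, 20, 128, 128))]
top, positions = select_tracking_targets(cands, m=2)
```

The positions of the selected candidates then serve directly as the target positions for the current frame.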

In the target tracking method provided by a second embodiment of the present invention, determining the preset number of candidate images on the detection feature map, together with the score and position of each candidate image, by using the i-th region candidate sub-network according to the i-th template feature map and the detection feature map of the same scale includes:

performing a convolution operation on the i-th template feature map to obtain a template sample set of a first preset size and a template sample set of a second preset size;

performing a convolution operation on the detection feature map of the same scale as the i-th template feature map to obtain a detection sample set of a third preset size and a detection sample set of a fourth preset size;

calculating, by the classification branch of the i-th region candidate sub-network, the score of each candidate image in the detection sample set of the third preset size according to the template sample set of the first preset size and the detection sample set of the third preset size;

determining, by the regression branch of the i-th region candidate sub-network, the position of each sample in the detection sample set of the fourth preset size according to the template sample set of the second preset size and the detection sample set of the fourth preset size; and

determining the position of each candidate image according to the positions of the samples in the detection sample set of the fourth preset size, the detection sample set of the third preset size being identical to the detection sample set of the fourth preset size.
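The classification and regression branches described above compare a template sample set with a detection sample set; in Siamese-RPN-style trackers this comparison is a cross-correlation, with the template acting as a sliding kernel. A minimal single-channel sketch of that operation (the channel dimension and the learned convolutions are omitted; this is an assumption-laden illustration, not the patented implementation):

```python
import numpy as np

def cross_correlate(detection, template):
    """Slide the template sample (kernel) over the detection sample and
    return a response map, the way a Siamese RPN branch compares the two
    feature maps. Both inputs are 2-D arrays here for simplicity; real
    feature maps also carry a channel dimension.
    """
    th, tw = template.shape
    dh, dw = detection.shape
    out = np.empty((dh - th + 1, dw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(detection[y:y + th, x:x + tw] * template)
    return out

det = np.zeros((6, 6))
det[2:4, 2:4] = 1.0    # a bright 2x2 patch hidden in the detection map
tmpl = np.ones((2, 2))  # template matching that patch
resp = cross_correlate(det, tmpl)
peak = np.unravel_index(np.argmax(resp), resp.shape)  # location of the best match
```

The classification branch would read scores off such a response map, while the regression branch would refine the box coordinates at each position.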

The target tracking method provided by a third embodiment of the present invention further includes:

determining the position response values of the target positions corresponding to the top m candidate images; and

when the maximum position response value is greater than a preset response threshold, using the candidate image corresponding to the maximum position response value as the template frame.

In the above target tracking method, the response value is calculated according to the following formula:

[formula image: Figure BDA0002578585200000031]

where the quantity shown in Figure BDA0002578585200000036 denotes the position response value, t* denotes the target position, y(t*) denotes the response result at the target position t*, t denotes the interference position closest to the target position, y(t) denotes the response result at the interference position t, and Δ is a twice continuously differentiable function.
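The template-update rule of the third embodiment (replace the template frame when the best position response value exceeds a preset threshold) can be sketched as follows; the function name, data types, and threshold value are illustrative assumptions.

```python
def maybe_update_template(current_template, top_candidates, response_values,
                          response_threshold):
    """Return the (possibly updated) template frame.

    top_candidates: the m candidate images; response_values: their
    position response values. If the largest response value exceeds the
    preset threshold, the corresponding candidate becomes the new
    template frame; otherwise the old template frame is kept.
    """
    best = max(range(len(response_values)), key=lambda i: response_values[i])
    if response_values[best] > response_threshold:
        return top_candidates[best]
    return current_template

# Illustrative values: candidate "B" has the strongest, above-threshold response.
tmpl = "initial-template"
new_tmpl = maybe_update_template(tmpl, ["A", "B", "C"], [0.3, 0.9, 0.5],
                                 response_threshold=0.6)
```

Keeping the old template when all responses are weak guards against drifting onto a distractor.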

In the target tracking method of the above embodiments, the target tracking model corresponding to the method is pre-trained on a training sample set until the error loss of the target tracking model is less than a preset error threshold.

The error loss is calculated using the following loss function:

[formula image: Figure BDA0002578585200000032]

where the quantity shown in Figure BDA0002578585200000033 denotes the error loss, β_i denotes the weighting coefficient of the i-th region candidate sub-network, μ denotes the attenuation parameter, and the quantity shown in Figure BDA0002578585200000034 denotes the weighted response value of the n region candidate sub-networks after weighting.

The weighted response value is calculated as follows:

[formula image: Figure BDA0002578585200000035]

where y_β(t*) denotes the weighted response result at the target position t*, t denotes the interference position closest to the target position, y_β(t) denotes the weighted response result at the interference position t, and Δ is a twice continuously differentiable function.

The weighted response result is calculated as follows:

[formula image: Figure BDA0002578585200000041]

where s_i(t) denotes the response result of the i-th region candidate sub-network.
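The formulas above appear only as images in the source. Assuming the natural reading that the weighted response result is a weighted sum of the sub-network responses, y_β(t) = Σ_i β_i·s_i(t), a small sketch:

```python
def weighted_response(beta, sub_responses, t):
    """Weighted response result at position t, assuming the reading
    y_beta(t) = sum_i beta_i * s_i(t), where s_i(t) is the response of
    the i-th region candidate sub-network at t. (This linear-combination
    form is an assumption; the exact formula is an image in the source.)
    """
    return sum(b * s(t) for b, s in zip(beta, sub_responses))

# Two toy sub-network response functions and their weighting coefficients:
subs = [lambda t: 1.0, lambda t: 0.5 * t]
beta = [0.4, 0.6]
y = weighted_response(beta, subs, t=2.0)  # 0.4*1.0 + 0.6*1.0 = 1.0
```

During training, the coefficients β_i would be tuned so that the combined response separates the target position from nearby interference positions.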

In the target tracking method of the above embodiments, the n different scales include at least one of a 32×32-pixel scale, a 64×64-pixel scale, a 128×128-pixel scale, and a 256×256-pixel scale.

A fourth embodiment of the present invention provides a target tracking device, which includes:

a template feature map acquisition module, configured to extract n template feature maps of different scales from a template frame by using a feature pyramid network;

a detection feature map acquisition module, configured to extract, from a detection frame by using the feature pyramid network, n detection feature maps of different scales corresponding one-to-one to the scales of the n template feature maps;

a candidate image determination module, configured to determine, by using the i-th region candidate sub-network and according to the i-th template feature map and the detection feature map of the same scale as the i-th template feature map, a preset number of candidate images on the detection feature map, together with the score and position of each candidate image; and

a tracking target determination module, configured to determine, after the n region candidate sub-networks have processed the n template feature maps and the corresponding detection feature maps, the top m candidate images with the highest scores among all candidate images as the tracking target according to the scores of the candidate images, the positions corresponding to the tracking target serving as the target positions.

The above candidate image determination module includes:

a template sample set acquisition unit, configured to perform a convolution operation on the i-th template feature map to obtain a template sample set of a first preset size and a template sample set of a second preset size;

a detection sample set acquisition unit, configured to perform a convolution operation on the detection feature map of the same scale as the i-th template feature map to obtain a detection sample set of a third preset size and a detection sample set of a fourth preset size;

a candidate image score calculation unit, configured for the classification branch of the i-th region candidate sub-network to calculate the score of each candidate image in the detection sample set of the third preset size according to the template sample set of the first preset size and the detection sample set of the third preset size;

a sample position determination unit, configured for the regression branch of the i-th region candidate sub-network to determine the position of each sample in the detection sample set of the fourth preset size according to the template sample set of the second preset size and the detection sample set of the fourth preset size; and

a candidate image position determination unit, configured to determine the position of each candidate image according to the positions of the samples in the detection sample set of the fourth preset size, the detection sample set of the third preset size being identical to the detection sample set of the fourth preset size.

The above embodiments also relate to a terminal device including a memory and a processor, the memory storing a computer program, and the processor running the computer program so that the terminal device executes the above target tracking method.

The above embodiments also relate to a readable storage medium storing a computer program which, when run on a processor, executes the above target tracking method.

The present invention extracts n template feature maps of different scales from the template frame by using a feature pyramid network; extracts, from the detection frame by using the feature pyramid network, n detection feature maps of different scales corresponding one-to-one to the scales of the n template feature maps; determines, by using the i-th region candidate sub-network and according to the i-th template feature map and the detection feature map of the same scale, a preset number of candidate images on the detection feature map, together with the score and position of each candidate image; and, after the n region candidate sub-networks have processed the n template feature maps and the corresponding detection feature maps, determines, according to the scores of the candidate images, the top m candidate images with the highest scores among all candidate images as the tracking target, the positions corresponding to the tracking target serving as the target positions. On the one hand, the technical solution of the present invention uses the feature pyramid network as the feature extraction layer of the tracking framework, effectively fusing low-level high-resolution information with high-level high-semantic information, so that the target position can be located more precisely; the tracking performance on small targets is particularly strong. On the other hand, the tracking targets are screened through the improved region candidate sub-networks, achieving accurate tracking of small targets.

Description of the Drawings

In order to explain the technical solutions of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and therefore should not be regarded as limiting its scope. In the various figures, similar components are given similar reference numerals.

FIG. 1 shows a schematic flowchart of a target tracking method according to an embodiment of the present invention;

FIG. 2 shows a schematic structural diagram of a target tracking model according to an embodiment of the present invention;

FIG. 3 shows a schematic flowchart of determining candidate image scores and positions according to an embodiment of the present invention;

FIG. 4 shows a schematic structural diagram of a region candidate sub-network model according to an embodiment of the present invention;

FIG. 5 shows a schematic flowchart of another target tracking method according to an embodiment of the present invention;

FIG. 6 shows a schematic structural diagram of a target tracking device according to an embodiment of the present invention;

FIG. 7 shows a schematic structural diagram for determining candidate image scores and positions according to an embodiment of the present invention.

Description of main reference numerals:

1 - target tracking device; 100 - template feature map acquisition module; 200 - detection feature map acquisition module; 300 - candidate image determination module; 400 - tracking target determination module; 500 - response value calculation module; 600 - template frame update module; 310 - template sample set acquisition unit; 320 - detection sample set acquisition unit; 330 - candidate image score calculation unit; 340 - sample position determination unit; 350 - candidate image position determination unit.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Hereinafter, the terms "comprising", "having" and their cognates, as used in the various embodiments of the present invention, are intended only to denote particular features, numbers, steps, operations, elements, components, or combinations thereof, and should not be construed as excluding the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Furthermore, the terms "first", "second", "third", etc. are used only to distinguish descriptions and should not be construed as indicating or implying relative importance.

Unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meanings as commonly understood by those of ordinary skill in the art to which the various embodiments of the present invention belong. Terms such as those defined in commonly used dictionaries are to be interpreted as having meanings consistent with their contextual meanings in the relevant technical field, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in the various embodiments of the present invention.

The present invention proposes a target tracking method obtained by improving three existing techniques: the Siamese neural network structure, the feature pyramid network, and the region proposal network.

The Siamese network structure means that the main body of the network consists of an upper branch and a lower branch that share all the weights of the same network, like twins. It is commonly used for classification problems with many or uncertain categories but few samples per category. In the field of visual target tracking, the upper branch of the Siamese network is the template branch, which extracts the appearance features of the template frame; the lower branch is the detection branch, whose input is a candidate search region cropped from the current frame according to the tracking result of the previous frame. After both pass through the same network, the feature map of the template branch is compared with the feature maps of multiple candidate regions of the current frame by similarity computation, and the candidate region with the highest score is taken as the tracking result for the current frame.
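The weight sharing and similarity scoring described above can be illustrated with a toy sketch; the linear "backbone" and the dot-product similarity here stand in for the real shared convolutional network and correlation layer, and all names and values are illustrative.

```python
import numpy as np

def embed(image, w):
    """Stand-in for the shared backbone: both branches of a Siamese
    network run the SAME weights w over their input (weight sharing is
    the defining property described above). A single linear map is used
    purely for illustration."""
    return w @ image.ravel()

def track_frame(template, candidates, w):
    """Score each candidate region of the current frame against the
    template embedding and return the index of the best match."""
    t = embed(template, w)
    scores = [float(t @ embed(c, w)) for c in candidates]
    return int(np.argmax(scores)), scores

w = np.eye(16)[:8]                          # toy shared weights (picks first 8 features)
template = np.arange(16.0).reshape(4, 4)     # template frame features
cands = [np.ones((4, 4)), template.copy()]   # a distractor and an exact match
best, scores = track_frame(template, cands, w)
```

Because both branches use the identical weights `w`, similar inputs land close together in embedding space, which is what makes the dot-product score meaningful.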

A feature pyramid usually means that, from a single input image, multiple images of different scales are obtained; connecting the four corners of these images of different scales forms an image pyramid resembling a real pyramid. The Feature Pyramid Network (FPN) used in the field of target tracking adds a scale dimension (which can be thought of as depth) to a two-dimensional image. Unlike traditional detection algorithms, which use only top-level features for prediction, the FPN fuses features of different levels while also predicting independently at different feature layers, thereby obtaining more robust semantic information. Through its bottom-up and top-down pathways, the FPN makes full use of low-level high-resolution information and high-level high-semantic information; for small targets in particular, it also increases the resolution of the feature maps, so that more useful information about small targets can be obtained.
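A toy sketch of the bottom-up and top-down pathways described above, using average pooling for the bottom-up path and nearest-neighbour upsampling plus addition for the top-down fusion (a real FPN uses learned convolutions and lateral 1×1 layers; this is only a shape-level illustration):

```python
import numpy as np

def pool2(x):
    """Bottom-up path: 2x2 average pooling halves the resolution."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Top-down path: nearest-neighbour 2x upsampling."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fpn_features(image, levels=3):
    """Build a bottom-up pyramid, then merge top-down so every output
    level mixes high-level semantics with its own resolution."""
    bottom_up = [image]
    for _ in range(levels - 1):
        bottom_up.append(pool2(bottom_up[-1]))
    merged = [bottom_up[-1]]                   # coarsest level passes through as-is
    for feat in reversed(bottom_up[:-1]):      # walk back down, fusing each level
        merged.append(feat + upsample2(merged[-1]))
    return merged[::-1]                        # finest level first

feats = fpn_features(np.ones((8, 8)))          # three fused levels: 8x8, 4x4, 2x2
```

The finest output level keeps its original resolution yet has accumulated information from every coarser level, which is exactly the property that helps with small targets.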

The Region Proposal Network (RPN) is a network for extracting candidate boxes and first appeared in the Faster R-CNN architecture. It uses candidate boxes (also called anchor boxes), which in computer vision usually serve as fixed reference boxes. In a target tracking task, the tracked target has uncertain category, position, and scale; the improved anchor-box technique pre-establishes a set of fixed reference boxes of different scales at different positions, covering roughly all positions and scales, and each fixed reference box is responsible for detecting targets whose intersection-over-union with it exceeds a preset threshold. The region proposal network not only recognizes targets well but also recognizes them quickly.

Embodiment 1

Referring to FIG. 1, this embodiment shows a target tracking method including the following steps.

Step S100: extract n template feature maps of different scales from the template frame by using the feature pyramid network.

At the start of target tracking, the first frame of the video can be used as the template frame, and the feature pyramid network (FPN) is used to extract n template feature maps of different scales from it. As an example, referring to FIG. 2, the FPN extracts four template feature maps of different scales from the template frame: template feature maps at the 32×32-pixel, 64×64-pixel, 128×128-pixel, and 256×256-pixel scales.

It should be understood that n is a positive integer that can be set to 3, 4, 5, 6, etc. and adjusted according to the actual tracking performance; the scales of the template feature maps can also be set flexibly according to specific requirements.

Step S200: extract, from the detection frame by using the feature pyramid network, n detection feature maps of different scales corresponding one-to-one to the scales of the n template feature maps.

The detection frame is the current frame in which the tracking target is to be found; the feature pyramid network extracts from it n detection feature maps of different scales corresponding one-to-one to the scales of the n template feature maps. As an example, referring to FIG. 2, the FPN extracts four detection feature maps of different scales from the detection frame: detection feature maps at the 32×32-pixel, 64×64-pixel, 128×128-pixel, and 256×256-pixel scales.

Step S300: using the i-th region candidate sub-network, determine on the detection feature map a preset candidate number of candidate images, together with the score and position of each candidate image, according to the i-th template feature map and the detection feature map having the same scale as the i-th template feature map.

Exemplarily, referring to Figure 2, when n is 4 there are 4 corresponding region candidate sub-networks (RPNs). The 32×32-pixel template feature map and the 32×32-pixel detection feature map serve as the input of the first RPN; the 64×64-pixel template feature map and the 64×64-pixel detection feature map serve as the input of the second RPN; the 128×128-pixel template feature map and the 128×128-pixel detection feature map serve as the input of the third RPN; and the 256×256-pixel template feature map and the 256×256-pixel detection feature map serve as the input of the fourth RPN.

The 4 region candidate sub-networks (RPNs) respectively determine, on their corresponding detection feature maps, a preset candidate number of candidate images together with the score and position of each candidate image. On the detection feature map of each scale, anchor boxes of 3 aspect ratios can be preset to cover the tracking targets that may appear in the map; the 3 aspect ratios are 1:2, 1:1 and 2:1. Correspondingly, the detection feature maps of the 4 scales involve 12 preset anchor box configurations in total.
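As an illustrative sketch (not the patented implementation), one set of k = 3 equal-area anchor shapes per scale can be generated for the three aspect ratios above; the choice of the base size equal to the feature-map scale is an assumption made only for illustration:

```python
import numpy as np

def generate_anchors(base_size, ratios=(0.5, 1.0, 2.0)):
    """Return k anchor (w, h) pairs of equal area for the given aspect ratios.

    ratio = w / h, so 0.5, 1.0 and 2.0 correspond to 1:2, 1:1 and 2:1.
    """
    area = float(base_size * base_size)
    anchors = []
    for r in ratios:
        w = np.sqrt(area * r)  # solves w/h = r and w*h = area
        h = w / r
        anchors.append((w, h))
    return anchors

# One set of k = 3 anchors per scale; 4 scales -> 12 preset anchor shapes.
all_anchors = [generate_anchors(s) for s in (32, 64, 128, 256)]
```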

The classification branch of a region candidate sub-network (RPN) can be used to compute the classification score of each anchor box, and the regression branch of the RPN can be used to determine the regression position of each anchor box. It should be understood that each anchor box contains one candidate image: the regression position of an anchor box is the position of the corresponding candidate image, and its classification score is the score of that candidate image.

Step S400: after the n region candidate sub-networks have finished processing the n template feature maps and the corresponding detection feature maps, determine the top m candidate images with the highest scores among all candidate images as the tracking targets according to the scores of the candidate images, and take the positions corresponding to the tracking targets as the target positions.

The candidate images are sorted from high to low according to their scores, and the top m candidate images with the highest scores are determined as the tracking targets, where m is a preset positive integer.
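A minimal sketch of this top-m selection, assuming the scores and positions from all sub-networks have already been collected into flat arrays:

```python
import numpy as np

def select_top_m(scores, positions, m):
    """Pick the m highest-scoring candidates and their positions.

    scores: (N,) candidate scores; positions: (N, 4) boxes as (x, y, w, h).
    """
    order = np.argsort(scores)[::-1][:m]  # indices sorted by descending score
    return scores[order], positions[order]

scores = np.array([0.1, 0.9, 0.4, 0.7])
boxes = np.arange(16).reshape(4, 4).astype(float)
top_s, top_b = select_top_m(scores, boxes, m=2)  # keeps candidates 1 and 3
```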

In the above, i is a positive integer less than or equal to n.

In this embodiment, a feature pyramid network is used to extract n template feature maps of different scales from the template frame; the feature pyramid network is then used to extract from the detection frame n detection feature maps of different scales in one-to-one correspondence with the scales of the n template feature maps; the i-th region candidate sub-network determines, on the detection feature map, a preset candidate number of candidate images together with the score and position of each candidate image, according to the i-th template feature map and the detection feature map of the same scale; and after the n region candidate sub-networks have processed the n template feature maps and the corresponding detection feature maps, the top m candidate images with the highest scores among all candidate images are determined as the tracking targets, and their positions are taken as the target positions. On the one hand, the technical solution of this embodiment uses the feature pyramid network as the feature extraction layer of the tracking framework, effectively fusing low-level high-resolution information with high-level high-semantic information, so that the target position can be located more accurately; the tracking performance on small targets is particularly outstanding. On the other hand, the tracking targets are screened by the improved region candidate sub-networks, achieving accurate tracking of small targets.

Embodiment 2

As shown in Figure 4, a region candidate sub-network contains a classification branch for distinguishing the target from the background and a regression branch for bounding-box regression.

Further, referring to Figure 3, step S300 of Embodiment 1 above includes the following steps:

Step S310: performing a convolution operation on the i-th template feature map to obtain a template sample set of a first preset size and a template sample set of a second preset size.

The first convolution layer of the i-th region candidate sub-network performs a convolution operation on the input i-th template feature map to obtain the template sample set of the first preset size and the template sample set of the second preset size.

Further, a template sample set of the first preset size, of size 4×4×(2k×256), is obtained on the classification branch of the i-th region candidate sub-network (denoted by a formula image in the original). It indicates that the features of the 4×4 template samples have 2k variations over the k different anchor boxes. It should be understood that the 2k variations reflect that the image in each anchor box may be in one of two states, background or target, i.e. 0 or 1.

Further, a template sample set of the second preset size, of size 4×4×(4k×256), is obtained on the regression branch of the i-th region candidate sub-network (denoted by a formula image in the original). It indicates that the features of the 4×4-pixel template samples have 4k variations over the k different anchor boxes. It should be understood that the 4k variations correspond to the width, height, abscissa and ordinate of the position; each anchor box is represented by its width, height, abscissa and ordinate.

Step S320: performing a convolution operation on the detection feature map having the same scale as the i-th template feature map, to obtain a detection sample set of a third preset size and a detection sample set of a fourth preset size.

The first convolution layer of the i-th region candidate sub-network performs a convolution operation on the input detection feature map of the same scale as the i-th template feature map, to obtain the detection sample set of the third preset size and the detection sample set of the fourth preset size.

Further, a detection sample set of the third preset size, 20×20×256, is obtained on the classification branch of the i-th region candidate sub-network, and a detection sample set of the fourth preset size, also 20×20×256, is obtained on the regression branch of the i-th region candidate sub-network (both denoted by formula images in the original).

The 256 in the above sizes denotes the number of channels of the samples; the feature dimension is expanded to 256 through the network training process of the feature pyramid.

Step S330: the classification branch of the i-th region candidate sub-network calculates the score of each candidate image in the detection sample set of the third preset size according to the template sample set of the first preset size and the detection sample set of the third preset size.

The classification branch gives the classification score of each input detection sample, i.e. the detailed score with which it is predicted to be target or background. The corresponding score map (denoted by a formula image in the original) is obtained by correlating the detection sample set of the classification branch with the template sample set of the classification branch, where ★ denotes the correlation operation.
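The ★ correlation can be sketched in plain NumPy as a sliding-window cross-correlation of the detection features with the template kernel; the channel count below is a toy value chosen for illustration rather than the 256 channels used above:

```python
import numpy as np

def xcorr(x, z):
    """Cross-correlate detection features x (C, H, W) with template kernel z (C, h, w).

    Returns an (H-h+1, W-w+1) response map, one value per sliding position,
    as the ★ operation does for each output channel.
    """
    C, H, W = x.shape
    _, h, w = z.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + h, j:j + w] * z)
    return out

x = np.random.rand(4, 20, 20)  # toy detection features (C=4 instead of 256)
z = np.random.rand(4, 4, 4)    # toy 4x4 template kernel
resp = xcorr(x, z)             # 17x17 response map
```

In practice this is implemented as a convolution whose weights are the template features, with one such correlation per anchor-related output channel.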

Step S340: the regression branch of the i-th region candidate sub-network determines the position of each sample in the detection sample set of the fourth preset size according to the template sample set of the second preset size and the detection sample set of the fourth preset size.

The regression branch gives the position regression value of each detection sample (denoted by a formula image in the original), obtained by correlating the detection sample set of the regression branch with the template sample set of the regression branch, where ★ denotes the correlation operation. The position regression value contains the abscissa, ordinate, width and height, corresponding to the four values dx, dy, dw and dh respectively.

Step S350: determining the position corresponding to each candidate image according to the position of each sample in the detection sample set of the fourth preset size, the detection sample set of the third preset size and the detection sample set of the fourth preset size being equivalent.

The detection sample set of the third preset size and the detection sample set of the fourth preset size are sample sets of the same size, so the position corresponding to each candidate image can be determined from the position of each sample in the detection sample set of the fourth preset size.

From the classification output information and the regression output information of the top m candidates (denoted by formula images in the original), the position information of the m candidate positions with the highest scores can be obtained. The specific calculation formulas are as follows:

x_pro = x_an + dx · w_an

y_pro = y_an + dy · h_an

w_pro = w_an · e^dw

h_pro = h_an · e^dh

where (x_an, y_an, w_an, h_an) are the original center coordinates and the width and height of the anchor box corresponding to the i-th candidate position, cls denotes the classification branch and reg denotes the regression branch. For each subscript: i ∈ [0, w), j ∈ [0, h), l ∈ [0, 2k), p ∈ [0, k); each A is a vector set of output information.
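A sketch of this anchor refinement, assuming boxes are given as center coordinates plus width and height; the specific anchor and offset values are illustrative:

```python
import numpy as np

def decode_boxes(anchors, deltas):
    """Apply regression offsets (dx, dy, dw, dh) to anchors (x, y, w, h).

    x_pro = x_an + dx * w_an,  y_pro = y_an + dy * h_an,
    w_pro = w_an * exp(dw),    h_pro = h_an * exp(dh).
    """
    x, y, w, h = anchors.T
    dx, dy, dw, dh = deltas.T
    return np.stack([x + dx * w, y + dy * h, w * np.exp(dw), h * np.exp(dh)], axis=1)

anchors = np.array([[50.0, 60.0, 32.0, 64.0]])
deltas = np.array([[0.25, -0.5, np.log(2.0), 0.0]])
boxes = decode_boxes(anchors, deltas)  # -> [[58., 28., 64., 64.]]
```

The exponential on the size offsets keeps the predicted width and height positive while letting the network regress unbounded values.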

Embodiment 3

Referring to Figure 5, this embodiment shows that after steps S100–S400 above, the target tracking method further includes the following steps:

Step S500: determining the position response values of the target positions corresponding to the top m candidate images.

The position response values of the target positions corresponding to the top m candidate images can be calculated respectively according to the following formula:

(formula image in the original)

where the left-hand side denotes the position response value, t* denotes the target position corresponding to any one of the m candidate images, y(t*) denotes the response result of the target position t*, t denotes the interference position closest to the target position, y(t) denotes the response result of the interference position t, and Δ is a twice continuously differentiable function: the closer t is to t*, the closer Δ(t − t*) is to 0; the farther t is from t*, the closer Δ(t − t*) is to 1.

It should be understood that the m position response values corresponding to the m candidate positions can be determined according to the above formula.

Step S600: when the maximum position response value is greater than a preset response threshold, taking the candidate image corresponding to the maximum position response value as the template frame.

The candidate image whose position response value is greater than the preset response threshold is selected from the m candidate positions and used as the new template frame, and the above step S100 is then performed again.

The above online update scheme based on high-score sample feedback takes the high-scoring candidate samples obtained during tracking as new template frames for subsequent detection tasks, effectively improving the accuracy and robustness of target tracking.
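A minimal sketch of this high-score feedback rule; the threshold value and the placeholder frame objects are assumptions made only for illustration:

```python
def maybe_update_template(template, candidates, responses, threshold=0.9):
    """Replace the template with the best candidate if its response clears the threshold.

    candidates: candidate images; responses: their position response values.
    """
    best = max(range(len(responses)), key=lambda i: responses[i])
    if responses[best] > threshold:
        return candidates[best]  # high-score candidate becomes the new template frame
    return template              # otherwise keep tracking with the old template

tpl = "frame0"
new_tpl = maybe_update_template(tpl, ["candA", "candB"], [0.95, 0.4])  # -> "candA"
```

Gating the update on the response value avoids contaminating the template with low-confidence detections during occlusion or drift.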

Further, a target tracking model corresponding to the target tracking method is trained in advance with a training sample set until the error loss of the target tracking model is smaller than a preset error threshold; the error loss is calculated using the following loss function:

(formula image in the original)

where the left-hand side denotes the error loss, β_i denotes the weighting coefficient of the i-th region candidate sub-network, μ denotes the attenuation parameter, and y_β denotes the weighted response value of the n region candidate sub-networks. The weighted response value is calculated as follows:

(formula image in the original)

where y_β(t*) denotes the weighted response result of the target position t*, t denotes the interference position closest to the target position, y_β(t) denotes the weighted response result of the interference position t, and Δ is a twice continuously differentiable function. The weighted response result is calculated as follows:

y_β(t) = Σ_{i=1}^{n} β_i · s_i(t)

where s_i(t) denotes the response result of the i-th region candidate sub-network, and the weighting coefficients sum to 1, i.e. Σ_{i=1}^{n} β_i = 1.
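As an illustrative sketch (toy response maps and assumed weight values), the weighted fusion of the n sub-network responses with weights summing to 1 can be computed as follows:

```python
import numpy as np

def weighted_response(responses, beta):
    """Fuse per-sub-network response maps s_i(t) with weights beta_i, sum(beta) == 1."""
    beta = np.asarray(beta, dtype=float)
    assert abs(beta.sum() - 1.0) < 1e-9  # weighting coefficients must sum to 1
    return np.tensordot(beta, np.asarray(responses, dtype=float), axes=1)

s = [np.full((17, 17), v) for v in (1.0, 2.0, 4.0)]  # toy response maps s_i(t)
beta = [0.5, 0.25, 0.25]
y_beta = weighted_response(s, beta)  # elementwise 0.5*1 + 0.25*2 + 0.25*4 = 2.0
```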

The above loss function is used to calculate the error loss of the target tracking model corresponding to the target tracking method. When the error loss of the target tracking model is smaller than the preset error threshold, the tracking quality of the model meets the standard.

Embodiment 4

Referring to Figure 6, this embodiment shows a target tracking apparatus 1, which includes: a template feature map acquisition module 100, a detection feature map acquisition module 200, a candidate image determination module 300 and a tracking target determination module 400.

The template feature map acquisition module 100 is configured to extract n template feature maps of different scales from the template frame by using a feature pyramid network. The detection feature map acquisition module 200 is configured to extract from the detection frame, by using the feature pyramid network, n detection feature maps of different scales in one-to-one correspondence with the scales of the n template feature maps. The candidate image determination module 300 is configured to determine, by using the i-th region candidate sub-network and according to the i-th template feature map and the detection feature map of the same scale, a preset candidate number of candidate images on the detection feature map together with the score and position of each candidate image. The tracking target determination module 400 is configured to, after the n region candidate sub-networks have processed the n template feature maps and the corresponding detection feature maps, determine the top m candidate images with the highest scores among all candidate images as the tracking targets according to the scores of the candidate images, and take the positions corresponding to the tracking targets as the target positions.

Further, referring to Figure 7, the candidate image determination module 300 includes: a template sample set acquisition unit 310, a detection sample set acquisition unit 320, a candidate image score calculation unit 330, a sample position determination unit 340 and a candidate image position determination unit 350.

The template sample set acquisition unit 310 is configured to perform a convolution operation on the i-th template feature map to obtain a template sample set of a first preset size and a template sample set of a second preset size. The detection sample set acquisition unit 320 is configured to perform a convolution operation on the detection feature map having the same scale as the i-th template feature map to obtain a detection sample set of a third preset size and a detection sample set of a fourth preset size. The candidate image score calculation unit 330 is configured so that the classification branch of the i-th region candidate sub-network calculates the score of each candidate image in the detection sample set of the third preset size according to the template sample set of the first preset size and the detection sample set of the third preset size. The sample position determination unit 340 is configured so that the regression branch of the i-th region candidate sub-network determines the position of each sample in the detection sample set of the fourth preset size according to the template sample set of the second preset size and the detection sample set of the fourth preset size. The candidate image position determination unit 350 is configured to determine the position corresponding to each candidate image according to the position of each sample in the detection sample set of the fourth preset size, the detection sample set of the third preset size and the detection sample set of the fourth preset size being equivalent.

The target tracking apparatus 1 further includes: a response value calculation module 500, configured to determine the position response values of the target positions corresponding to the top m candidate images; and a template frame update module 600, configured to, when the maximum position response value is greater than a preset response threshold, take the candidate image corresponding to the maximum position response value as the template frame.

Through the cooperative use of the template feature map acquisition module 100, the detection feature map acquisition module 200, the candidate image determination module 300 and the tracking target determination module 400, the target tracking apparatus 1 of this embodiment executes the target tracking method described in the above embodiments. The implementations and beneficial effects involved in the above embodiments also apply to this embodiment and are not repeated here.

The above embodiments relate to a terminal device, including a memory and a processor, where the memory is configured to store a computer program, and the processor runs the computer program so that the terminal device can execute the target tracking method described in the above embodiments.

The above embodiments relate to a readable storage medium storing a computer program which, when run on a processor, executes the target tracking method described in the above embodiments.

In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and structural diagrams in the accompanying drawings show the possible architectures, functions and operations of the apparatuses, methods and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or structural diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the structural diagrams and/or flowcharts, and combinations of blocks therein, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

In addition, the functional modules or units in the embodiments of the present invention may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.

If the functions are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a smartphone, a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

The above descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method of object tracking, the method comprising:
extracting n template feature graphs with different scales from the template frame by using a feature pyramid network;
extracting n detection feature maps with different scales, which are in one-to-one correspondence with the scales of the n template feature maps, from a detection frame by using a feature pyramid network;
determining preset candidate number of candidate images and corresponding scores and positions of the candidate images on a detection feature map by using an ith area candidate sub-network according to the ith template feature map and the detection feature map with the same scale as the ith template feature map;
and when the n regional candidate sub-networks finish processing the n template feature maps and the corresponding detection feature maps, determining the top m candidate images with the highest scores among all the candidate images as tracking targets according to the scores of the candidate images, and taking the positions corresponding to the tracking targets as target positions.
2. The method according to claim 1, wherein the determining, by using the ith area candidate sub-network, a preset candidate number of candidate images and scores and positions corresponding to the candidate images on the detection feature map according to the ith template feature map and the detection feature map with the same scale as the ith template feature map comprises:
performing convolution operation on the ith template characteristic diagram to obtain a template sample set with a first preset size and a template sample set with a second preset size;
performing convolution operation on the detection characteristic graph with the same scale as the ith template characteristic graph to obtain a detection sample set with a third preset size and a detection sample set with a fourth preset size;
the classification branch of the ith regional candidate sub-network calculates the score of each candidate image in the detection sample set of the third preset size according to the template sample set of the first preset size and the detection sample set of the third preset size;
determining the position of each sample in the detection sample set with the fourth preset size according to the template sample set with the second preset size and the detection sample set with the fourth preset size by the regression branch of the ith area candidate sub-network;
and determining the position corresponding to each alternative image according to the position of each sample in the detection sample set with the fourth preset size, wherein the detection sample set with the third preset size is equal to the detection sample set with the fourth preset size.
3. The target tracking method of claim 1, further comprising:
determining position response values of target positions corresponding to the first m candidate images;
and when the maximum position response value is larger than a preset position response threshold value, taking the alternative image corresponding to the maximum position response value as the template frame.
4. The object tracking method of claim 3, wherein the position response value is calculated according to the formula:
Figure FDA0002578585190000021
wherein the expression above denotes said position response value, t* represents said target position, y(t*) represents the response result of said target position t*, t represents the interference position closest to said target position, y(t) represents the response result of the interference position t, and Δ is a twice continuously differentiable function.
5. The target tracking method according to claim 1, wherein a target tracking model corresponding to the target tracking method is trained in advance by using a training sample set until an error loss of the target tracking model is smaller than a preset error threshold;
the error loss is calculated using the following loss function:
Figure FDA0002578585190000022
wherein the expression above represents the error loss, β_i represents the weighting coefficient of the i-th region candidate sub-network, μ represents the attenuation parameter, and y_β represents the weighted response value of the n region candidate sub-networks;
the weighted response value calculation formula is as follows:
Figure FDA0002578585190000031
wherein y_β(t*) represents the weighted response result of said target position t*, t represents the interference position closest to said target position, y_β(t) represents the weighted response result of the interference position t, and Δ is a twice continuously differentiable function;
the weighted response result calculation formula is as follows:
Figure FDA0002578585190000032
s_i(t) represents the response result of the i-th region candidate sub-network.
6. The target tracking method of any one of claims 1 to 5, wherein the n different scales comprise at least one of a 32×32 pixel scale, a 64×64 pixel scale, a 128×128 pixel scale, and a 256×256 pixel scale.
7. A target tracking device, comprising:
a template feature map acquisition module, configured to extract n template feature maps of different scales from a template frame using a feature pyramid network;
a detection feature map acquisition module, configured to extract, from a detection frame using the feature pyramid network, n detection feature maps of different scales whose scales correspond one-to-one to the scales of the n template feature maps;
a candidate image determination module, configured to determine, using the i-th region candidate sub-network, a preset number of candidate images and the score and position corresponding to each candidate image on the detection feature map, according to the i-th template feature map and the detection feature map of the same scale as the i-th template feature map; and
a tracking target determination module, configured to determine, after the n region candidate sub-networks have processed the n template feature maps and the corresponding detection feature maps, the first m candidate images with the highest scores among all candidate images as the tracking target according to the scores of all candidate images, the positions corresponding to the tracking target being taken as target positions.
8. The target tracking device of claim 7, wherein the candidate image determination module comprises:
a template sample set obtaining unit, configured to perform a convolution operation on the i-th template feature map to obtain a template sample set of a first preset size and a template sample set of a second preset size;
a detection sample set obtaining unit, configured to perform a convolution operation on the detection feature map of the same scale as the i-th template feature map to obtain a detection sample set of a third preset size and a detection sample set of a fourth preset size;
a candidate image score calculation unit, configured to calculate, by the classification branch of the i-th region candidate sub-network, the score of each candidate image in the detection sample set of the third preset size according to the template sample set of the first preset size and the detection sample set of the third preset size;
a sample position determination unit, configured to determine, by the regression branch of the i-th region candidate sub-network, the position of each sample in the detection sample set of the fourth preset size according to the template sample set of the second preset size and the detection sample set of the fourth preset size; and
a candidate image position determination unit, configured to determine the position corresponding to each candidate image according to the position of each sample in the detection sample set of the fourth preset size, wherein the detection sample set of the third preset size is equal in size to the detection sample set of the fourth preset size.
9. A terminal device, comprising a memory for storing a computer program and a processor for executing the computer program, so that the terminal device performs the target tracking method of any one of claims 1 to 6.
10. A readable storage medium, characterized in that it stores a computer program which, when executed by a processor, performs the target tracking method of any one of claims 1 to 6.
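Claims 5, 7, and 8 together describe a multi-scale scoring pipeline: each region candidate sub-network matches a template feature map against a detection feature map of the same scale to produce a response, the per-scale responses are fused with weights β_i, and the top-m positions are selected. The NumPy sketch below illustrates that flow under illustrative assumptions only — the feature shapes, the β values, and the plain channel-summed cross-correlation are stand-ins, not the patented sub-network implementation.

```python
import numpy as np

def cross_correlate(template, search):
    """Valid-mode 2-D cross-correlation of a template feature map against a
    detection (search) feature map, summed over channels. Stand-in for the
    classification-branch scoring of one region candidate sub-network."""
    c, th, tw = template.shape
    _, sh, sw = search.shape
    out = np.zeros((sh - th + 1, sw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(template * search[:, y:y + th, x:x + tw])
    return out

def weighted_response(responses, betas):
    """Claim-5-style fusion: y_beta(t) = sum_i beta_i * s_i(t), where s_i is
    the response map of the i-th region candidate sub-network."""
    fused = np.zeros_like(responses[0])
    for beta, s in zip(betas, responses):
        fused += beta * s
    return fused

def top_m_positions(response, m):
    """Claim-7-style selection: positions of the m highest response values."""
    flat = np.argsort(response, axis=None)[::-1][:m]
    return [tuple(np.unravel_index(i, response.shape)) for i in flat]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three hypothetical scales; channel counts differ, but template/search
    # sizes are chosen so every scale yields a 6x6 response map, making the
    # per-scale responses directly comparable position by position.
    responses = []
    for channels in (8, 16, 32):
        template = rng.standard_normal((channels, 3, 3))
        search = rng.standard_normal((channels, 8, 8))
        responses.append(cross_correlate(template, search))
    betas = [0.5, 0.3, 0.2]  # illustrative sub-network weights
    fused = weighted_response(responses, betas)
    print("fused response shape:", fused.shape)
    print("top-3 candidate positions:", top_m_positions(fused, 3))
```

In a real tracker the cross-correlation would be replaced by the classification and regression branches of each sub-network, and the β_i would be learned by minimizing the loss in claim 5 rather than fixed by hand.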
CN202010661194.5A 2020-07-10 2020-07-10 Target tracking method, device, terminal device and readable storage medium Active CN111815677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010661194.5A CN111815677B (en) 2020-07-10 2020-07-10 Target tracking method, device, terminal device and readable storage medium


Publications (2)

Publication Number Publication Date
CN111815677A true CN111815677A (en) 2020-10-23
CN111815677B CN111815677B (en) 2024-11-26

Family

ID=72841718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010661194.5A Active CN111815677B (en) 2020-07-10 2020-07-10 Target tracking method, device, terminal device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111815677B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614157A (en) * 2020-12-17 2021-04-06 上海眼控科技股份有限公司 Video target tracking method, device, equipment and storage medium
CN116309710A (en) * 2023-02-27 2023-06-23 荣耀终端有限公司 Object tracking method and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230359A (en) * 2017-11-12 2018-06-29 北京市商汤科技开发有限公司 Object detection method and device, training method, electronic equipment, program and medium
CN109344821A (en) * 2018-08-30 2019-02-15 西安电子科技大学 Small target detection method based on feature fusion and deep learning
CN110033473A (en) * 2019-04-15 2019-07-19 西安电子科技大学 Motion target tracking method based on template matching and depth sorting network
CN110544269A (en) * 2019-08-06 2019-12-06 西安电子科技大学 Siamese Network Infrared Target Tracking Method Based on Feature Pyramid
CN110796679A (en) * 2019-10-30 2020-02-14 电子科技大学 A target tracking method for aerial imagery
CN111179217A (en) * 2019-12-04 2020-05-19 天津大学 A multi-scale target detection method in remote sensing images based on attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Xiaolei et al., "Dual-entropy fast ROI extraction and image classification optimization method", Computer and Modernization, no. 2, 15 February 2019 (2019-02-15) *


Also Published As

Publication number Publication date
CN111815677B (en) 2024-11-26

Similar Documents

Publication Publication Date Title
CN109614985B (en) An Object Detection Method Based on Densely Connected Feature Pyramid Network
CN112801047B (en) Defect detection method and device, electronic equipment and readable storage medium
CN111815687A (en) Point cloud matching method, positioning method, device and storage medium
CN113140005A (en) Target object positioning method, device, equipment and storage medium
CN110334703B (en) A method for ship detection and recognition in day and night images
CN114927236A (en) A detection method and system for multiple target images
US11961249B2 (en) Generating stereo-based dense depth images
CN110633727A (en) Deep neural network ship target fine-grained identification method based on selective search
US20250259068A1 (en) Training object discovery neural networks and feature representation neural networks using self-supervised learning
CN116721139A (en) Generate depth images of image data
Dong et al. Learning regional purity for instance segmentation on 3d point clouds
CN117576149A (en) A single target tracking method based on attention mechanism
CN117392111A (en) Network and method for detecting surface defects of strip steel camouflage
Wibowo et al. Collaborative learning based on convolutional features and correlation filter for visual tracking
CN116844185B (en) Multi-person gesture recognition method based on mass fraction
CN113129332A (en) Method and apparatus for performing target object tracking
CN111815677A (en) Target tracking method, device, terminal device and readable storage medium
CN117765363A (en) An image anomaly detection method and system based on lightweight memory library
CN115937205B (en) Method, apparatus, equipment and storage medium for generating images of surface defect ceramic tiles
CN117726790A (en) An image-based weak texture scene recognition system, method, device and medium
CN114943766B (en) Repositioning method, repositioning device, electronic equipment and computer readable storage medium
CN117095053A (en) Gesture recognition method, posture recognition model training method and related equipment
CN119169056B (en) Satellite video target tracking method, device and equipment based on refined positioning
CN111291611A (en) Pedestrian re-identification method and device based on Bayesian query expansion
CN115240077B (en) Anchor frame-independent corner point regression based object detection method and device for remote sensing images in any direction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: No. 19 Huamei Road, Tianhe District, Guangzhou City, Guangdong Province 510000, China
Applicant after: Guangzhou Xinhua College
Address before: No. 19 Huamei Road, Tianhe District, Guangzhou City, Guangdong Province 510000, China
Applicant before: XINHUA COLLEGE OF SUN YAT-SEN University
GR01 Patent grant