
CN108629299B - Long-time multi-target tracking method and system combining face matching - Google Patents


Info

Publication number
CN108629299B
CN108629299B (application CN201810371595.XA)
Authority
CN
China
Prior art keywords
face
tracker
detected
matched
matching
Prior art date
Legal status
Active
Application number
CN201810371595.XA
Other languages
Chinese (zh)
Other versions
CN108629299A (en)
Inventor
谭卫军
姚琪
齐德龙
刘汝帅
Current Assignee
Wuhan Magicision Intelligent Technology Co ltd
Original Assignee
Wuhan Magicision Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Magicision Intelligent Technology Co ltd filed Critical Wuhan Magicision Intelligent Technology Co ltd
Priority to CN201810371595.XA
Publication of CN108629299A
Application granted
Publication of CN108629299B
Legal status: Active
Anticipated expiration


Classifications

    • G06V40/168 Human faces — Feature extraction; Face representation
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, of sport video content
    • G06V40/161 Human faces — Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a long-time multi-target tracking method and system combined with face matching. Instead of deleting a tracker whose quality degrades, the tracker is placed in a sleep state; only when the number of sleeping trackers grows too large is the oldest one deleted. When a new face is detected, it is first checked against the trackers currently in the active state, and then against those in the sleep state. If a sleeping tracker matches, it is re-enabled and set back to the active state. The invention also adds face feature extraction, which greatly improves the accuracy of judging whether two faces belong to the same person. With this higher accuracy, reviving trackers becomes reliable, making long-term tracking possible.

Description

Long-time multi-target tracking method and system combining face matching
Technical Field
The invention relates to the technical field of face recognition, in particular to a long-time multi-target tracking method and system combining face matching.
Background
With the rapid development of artificial intelligence, application scenarios built around face recognition technology keep multiplying: face payment, and identity verification against ID cards at airports and high-speed rail stations. In security systems, video structuring centered on the human face can monitor people in crowded public places such as banks, airports and shopping malls, enabling automatic people-flow statistics, analysis based on face attributes, and automatic identification and tracking of specific persons.
In actual monitoring, factors such as video quality, complex lighting and large face angles mean that faces captured in the video may be blurred, turned at too large an angle, or too dark or too bright, which directly degrades the accuracy of face-based attribute analysis and of identifying and tracking a specific person. On the other hand, if every captured face is sent directly to attribute analysis, face recognition and other processing without tracking and face selection, a large number of face pictures are generated: people-flow statistics become impossible, the computing load on the server rises sharply, processing efficiency drops, wrong results are easily obtained, and the practicality of structured processing is reduced.
Disclosure of Invention
The invention provides a long-time multi-target tracking method and system combining face matching, aiming at the technical problems in the prior art.
The technical scheme for solving the technical problems is as follows:
on one hand, the invention provides a long-time multi-target tracking method combined with face matching, which comprises the following steps:
step 1, initializing a frame counter, acquiring a video frame, detecting a face, extracting and storing a feature vector, establishing a tracker for each detected face, and setting the state of the tracker to be an active state;
step 2, acquiring a next video frame, and adding 1 to a frame counter;
step 3, matching the existing tracker with the face in the video frame, if the existing tracker is matched with the face in the video frame, updating the corresponding tracker by using the face in the video frame, and simultaneously setting the state of the tracker which is not matched with the face to be in a sleep state;
step 4, judging whether the frame counter reaches a preset value, if so, resetting the frame counter and executing the step 5, otherwise, skipping to the step 2;
Step 5, performing face detection again on the current video frame and extracting feature vectors; matching each detected face with the active-state trackers and then the sleep-state trackers; if an active-state tracker matches, updating the corresponding tracker with the face; if a sleep-state tracker matches, modifying the tracker state to active and updating the tracker with the face; otherwise, establishing a tracker for the newly detected face; then jumping to step 2.
Further, the method also comprises establishing a buffer for each tracker when the tracker is created for the detected face, and initializing the survival time of the tracker, wherein the buffer is used for storing the quality of the face, the position information of the face or the face feature vector in each video frame tracked by the tracker.
Further, when matching the tracker with the human face, the method comprises the following steps:
matching of the active state tracker with the detected face: judging whether the detected face position is consistent with the face position tracked by the tracker, and if the detected face position is positioned in the tracker window and the coincidence area of the detected face and the face tracked by the tracker exceeds a preset threshold value, determining that the tracker is matched with the detected face;
if the active tracker is not matched, extracting the feature vector of the face picture, judging whether the distance between the detected face feature vector and the face feature vector tracked by the sleep state tracker is smaller than a preset threshold value or not, and if the distance is smaller than the preset threshold value, considering that the tracker is matched with the detected face.
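The two matching rules just described can be sketched as follows. This is a minimal sketch assuming axis-aligned (x, y, w, h) boxes and Euclidean feature distance; the patent fixes neither the distance metric nor the thresholds, so those are placeholders:

```python
import math

def overlap_ratio(box_a, box_b):
    """Overlap area of two (x, y, w, h) boxes, relative to the smaller box."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / min(aw * ah, bw * bh)

def matches_active(det_box, tracker_box, overlap_threshold=0.5):
    """Active-state rule: the detected face overlaps the tracker window
    by more than a preset threshold."""
    return overlap_ratio(det_box, tracker_box) > overlap_threshold

def matches_sleeping(det_feature, tracker_feature, dist_threshold=1.0):
    """Sleep-state rule: the feature-vector distance is below a preset threshold."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(det_feature, tracker_feature)))
    return dist < dist_threshold
```

Checking position first keeps the common case cheap; the feature-vector comparison runs only when no active tracker matches.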
Further, between step 3 and step 4, the method also includes judging whether the number of trackers has reached an upper limit value; if so, the sleep-state tracker with the longest survival time is deleted.
If the number of trackers has not reached the upper limit but the survival time of some sleep-state trackers has grown too long, reaching a specified length, those trackers are likewise deleted.
Further, for the survival time of the tracker:
at tracker creation time, the time to live is set to 0;
adding 1 to the survival time when processing one frame of video frame;
when the tracker matches a face, the survival time is reset to 0.
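The three survival-time rules map naturally onto a small record type. This is a minimal sketch with hypothetical field and method names; the patent does not prescribe any data layout:

```python
from dataclasses import dataclass, field

@dataclass
class FaceTracker:
    """Illustrative tracker record; names are not from the patent."""
    box: tuple                    # last known (x, y, w, h) of the face
    feature: list                 # stored face feature vector
    state: str = "active"         # "active" or "sleep"
    survival_time: int = 0        # rule 1: set to 0 at creation
    buffer: list = field(default_factory=list)  # per-frame face snapshots

    def on_frame(self):
        # Rule 2: every processed video frame adds 1 to the survival time.
        self.survival_time += 1

    def on_match(self, box):
        # Rule 3: a matched face resets the survival time to 0.
        self.box = box
        self.survival_time = 0
        self.state = "active"
```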
Further, after a tracker is deleted, the face pictures buffered in the tracker's buffer are obtained, and one or more high-quality faces are screened out according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than that of the other buffered faces.
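The filtering step can be sketched as a small selection function. Field names and threshold values below are illustrative placeholders; the patent specifies the conditions, not a data format:

```python
def select_best_faces(buffered, top_k=1, conf_th=0.9, sharp_th=0.5, lum_th=40):
    """Filter a tracker's buffered faces by the listed conditions,
    then keep the top_k largest survivors."""
    candidates = [
        f for f in buffered
        if f["confidence"] > conf_th   # credibility score above threshold
        and f["is_frontal"]            # judged as a frontal face
        and f["sharpness"] > sharp_th  # sharp (not blurred) enough
        and f["luminance"] > lum_th    # bright enough
    ]
    # Among the survivors, the larger face area wins.
    candidates.sort(key=lambda f: f["area"], reverse=True)
    return candidates[:top_k]
```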
On the other hand, the invention also provides a long-time multi-target tracking system combined with face matching, which comprises:
the video frame acquisition module is used for acquiring video frames;
the frame counter is used for recording the number of the acquired video frames;
the face detection module is used for carrying out face detection on the acquired video frames and extracting face characteristic vectors;
the tracker creating module is used for creating a tracker for each human face detected by the human face detecting module;
and the tracker matching and updating module is used for matching the detected face with the tracker and updating the tracker by using the detected face according to a matching result.
Further, the system also comprises a buffer creating module for creating a buffer for each tracker when the tracker is created for the detected face, and initializing the survival time of the tracker, wherein the buffer is used for storing the quality of the face, the position information of the face or the face feature vector in each video frame tracked by the tracker.
Further, for the survival time of the tracker:
at tracker creation time, the time to live is set to 0;
adding 1 to the survival time when processing one frame of video frame;
when the tracker matches a face, the survival time is reset to 0.
The system further comprises a face screening module, used for obtaining the face pictures buffered in a tracker's buffer after that tracker is deleted, and screening out one or more high-quality faces according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than that of the other buffered faces.
The invention has the beneficial effects that: during video processing, tracking is performed once a face is detected, and one or more best-quality faces are selected during tracking to represent a given person. In the end, only a small number of best-quality face pictures are produced for each face in the video, lowering the processing burden of subsequent video structuring.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of the system of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Face detection and tracking is to determine the position of a certain face in a video or image sequence and to continuously track the face during the motion of the face to ensure that the face is the same person. The performance of face detection and tracking is of great significance to face image analysis, face recognition and video structuring.
Common face tracking algorithms include:
1. Perform real-time face detection on every frame to obtain face positions, and decide whether two detections are the same person from the relationship between the face position in the previous frame and the current one, thereby obtaining the motion track of the face.
2. Combine face detection with an object tracking algorithm: face detection runs every few frames to obtain face positions, and an object tracker predicts the positions in the intervening frames, yielding the motion track. The periodic detections continuously correct the tracker's predictions.
Of the two, the former consumes considerable computing resources and runs slowly. The latter suffices for simple scenes (large faces, small angles, no mutual occlusion) because of the limitations of the tracking algorithm; but in most real settings, such as shopping malls, retail stores and ordinary streets, face angles are large, motion amplitude and speed vary widely, faces occlude one another and face quality is uneven, so track loss, interference between adjacent faces and false alarms occur easily.
In the present method, a face matching function is introduced to judge whether a newly detected face has already appeared in the past; if so, no new tracker is created. A target can thus be tracked effectively over a long time (several minutes), occlusion is handled, and a person's face is not uploaded repeatedly when the person reappears. Meanwhile, the feature vector produced by face matching can be stored directly and reused by the final face recognition function. The technique consumes more hardware resources, so it suits settings with ample hardware (such as a PC) but higher demands on tracking performance.
Specifically, on the one hand, the invention provides a long-time multi-target tracking method combined with face matching, as shown in fig. 1, comprising the following steps:
Step 1, initializing a frame counter, acquiring a video frame, detecting faces, extracting and storing feature vectors, establishing a tracker for each detected face, and setting the state of each tracker to active; MTCNN is used as the face detector.
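The detection-and-seeding of step 1 might look as follows. The `detect_faces` output shape (a list of dicts with `'box'` and `'confidence'` keys) follows the common Python `mtcnn` package and is an assumption; any detector with the same contract works, and injecting the detector keeps the sketch self-contained:

```python
def detect_and_create_trackers(frame, detector, extract_feature, min_conf=0.9):
    """Run face detection on one frame and seed a tracker per detected face.
    `detector` and `extract_feature` are injected stand-ins (hypothetical names)."""
    trackers = []
    for det in detector.detect_faces(frame):
        if det["confidence"] < min_conf:
            continue  # discard low-confidence detections
        x, y, w, h = det["box"]
        trackers.append({
            "box": (x, y, w, h),
            "feature": extract_feature(frame, (x, y, w, h)),
            "state": "active",       # step 1: new trackers start in the active state
            "survival_time": 0,      # survival time initialized to 0
        })
    return trackers
```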
The tracker we choose is a compound tracker: a native core tracker wrapped with additional signal-processing functions. The selectable core algorithms are:
1. Kernelized correlation filter (KCF)
2. Median optical flow (MedianFlow)
3. Tracking-learning-detection (TLD)
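All three core algorithms are available through OpenCV's contrib package under the legacy tracking API. The sketch below assumes opencv-contrib-python (≥ 4.5) and resolves the constructor lazily, so the name mapping itself carries no OpenCV dependency:

```python
# Constructor names from OpenCV's legacy tracking API (opencv-contrib-python).
_CORE_ALGORITHMS = {
    "KCF": ("legacy", "TrackerKCF_create"),            # kernelized correlation filter
    "MEDIANFLOW": ("legacy", "TrackerMedianFlow_create"),  # median optical flow
    "TLD": ("legacy", "TrackerTLD_create"),            # tracking-learning-detection
}

def core_tracker_factory_name(algorithm):
    """Resolve the fully qualified OpenCV constructor name for an algorithm."""
    namespace, ctor = _CORE_ALGORITHMS[algorithm.upper()]
    return f"cv2.{namespace}.{ctor}"

def make_core_tracker(algorithm="KCF"):
    """Instantiate the core tracker (requires opencv-contrib-python)."""
    import cv2  # imported lazily so the mapping above needs no OpenCV install
    namespace, ctor = _CORE_ALGORITHMS[algorithm.upper()]
    return getattr(getattr(cv2, namespace), ctor)()
```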
When the tracker is created for the detected face, a buffer is established for each tracker, and the survival time of the tracker is initialized, wherein the buffer is used for storing the quality of the face, the position information of the face or the face feature vector in each video frame tracked by the tracker.
Survival time for the tracker:
at tracker creation time, the time to live is set to 0;
adding 1 to the survival time when processing one frame of video frame;
when the tracker matches a face, the survival time is reset to 0.
Step 2, acquiring a next video frame, and adding 1 to a frame counter;
Step 3, matching the existing trackers with the faces in the video frame: judge whether a detected face position is consistent with the face position tracked by a tracker; if the detected face lies within the tracker window and the coincident area of the detected face and the tracked face exceeds a preset threshold, the tracker is considered matched with the detected face, and the corresponding tracker is updated with the face in the video frame. The state of any tracker not matched with a face is set to the sleep state.
When changing a tracker's state from active to sleep, its survival time is also considered: an active tracker is moved to the sleep state only when it has not matched a face and its survival time has reached a certain value, for example 2 seconds.
Step 4, judging whether the number of trackers has reached an upper limit value; if so, the sleep-state tracker with the longest survival time is deleted. If the upper limit has not been reached but the survival time of some sleep-state trackers has grown too long, reaching a specified length, those trackers are likewise deleted. Deleting trackers reduces hardware resource consumption.
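The deletion policy of step 4 can be sketched as a pruning pass over the tracker list. The limits below are illustrative placeholders, not values from the patent:

```python
def prune_trackers(trackers, max_trackers=32, max_sleep_frames=150):
    """Step 4 sketch: drop sleeping trackers whose survival time exceeds the cap,
    then, if still over the tracker limit, drop the longest-sleeping tracker.
    Returns (kept, deleted) so deleted trackers can go on to face screening."""
    kept, deleted = [], []
    for t in trackers:
        if t["state"] == "sleep" and t["survival_time"] > max_sleep_frames:
            deleted.append(t)     # slept past the specified time length
        else:
            kept.append(t)
    if len(kept) > max_trackers:
        sleeping = [t for t in kept if t["state"] == "sleep"]
        if sleeping:
            oldest = max(sleeping, key=lambda t: t["survival_time"])
            kept.remove(oldest)   # upper limit reached: delete the oldest sleeper
            deleted.append(oldest)
    return kept, deleted
```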
After a tracker is deleted, the face pictures buffered in its buffer are obtained, and one or more high-quality faces are screened out according to the filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than that of the other buffered faces.
Step 5, judging whether the frame counter has reached a preset value (typically 5 frames, adjustable to the application); if so, resetting the frame counter and executing step 6; otherwise, jumping to step 2;
Step 6, performing face detection again on the current video frame, and matching each detected face with the active-state trackers and then the sleep-state trackers:
firstly, judging whether the position of a detected face is consistent with the position of a face tracked by a tracker in an active state, and if the position of the detected face is positioned in a tracker window and the coincidence area of the detected face and the face tracked by the tracker exceeds a preset threshold value, considering that the tracker is matched with the detected face;
if the active tracker is not matched, extracting the feature vector of the face picture, judging whether the distance between the detected face feature vector and the face feature vector tracked by the sleep state tracker is smaller than a preset threshold value or not, and if the distance is smaller than the preset threshold value, considering that the tracker is matched with the detected face.
If the tracker in the active state is matched, the corresponding tracker is updated by using the face, if the tracker in the sleep state is matched, the tracker state is modified into the active state and the tracker is updated by using the face, otherwise, the tracker is created for the newly detected face, and the step 2 is skipped.
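Putting steps 1 through 6 together, the overall loop can be sketched as follows. The helpers (`detect`, `extract`, `match_active`, `match_sleeping`) are injected stand-ins for the components described above, and all names are hypothetical:

```python
DETECT_EVERY = 5  # frame-counter preset; the text suggests about 5 frames

def track_video(frames, detect, extract, match_active, match_sleeping):
    """Skeleton of the per-frame loop; helpers are injected, not defined here."""
    trackers, counter = [], 0
    for i, frame in enumerate(frames):
        if i == 0:
            # Step 1: seed one active tracker per detected face.
            for box in detect(frame):
                trackers.append({"box": box, "feature": extract(frame, box),
                                 "state": "active", "survival_time": 0})
            continue
        counter += 1
        for t in trackers:
            t["survival_time"] += 1
        if counter < DETECT_EVERY:
            continue  # steps 2-3: cheap per-frame tracker updates would go here
        counter = 0
        # Steps 5-6: re-detect, match active trackers first, then sleeping ones.
        for box in detect(frame):
            feat = extract(frame, box)
            t = match_active(trackers, box) or match_sleeping(trackers, feat)
            if t is not None:
                t.update(box=box, feature=feat, state="active", survival_time=0)
            else:
                trackers.append({"box": box, "feature": feat,
                                 "state": "active", "survival_time": 0})
    return trackers
```

The sleep transition and pruning of steps 3 and 4 would slot into the per-frame branch; they are omitted here to keep the skeleton short.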
On the other hand, the invention also provides a long-time multi-target tracking system combining face matching, as shown in fig. 2, comprising:
the video frame acquisition module is used for acquiring video frames;
the frame counter is used for recording the number of the acquired video frames;
the face detection module is used for carrying out face detection on the acquired video frames and extracting face characteristic vectors;
the tracker creating module is used for creating a tracker for each human face detected by the human face detecting module;
and the tracker matching and updating module is used for matching the detected face with the tracker and updating the tracker by using the detected face according to a matching result.
Further, the system also comprises a buffer creating module for creating a buffer for each tracker when the tracker is created for the detected face, and initializing the survival time of the tracker, wherein the buffer is used for storing the quality of the face, the position information of the face or the face feature vector in each video frame tracked by the tracker.
Further, for the survival time of the tracker:
at tracker creation time, the survival time is set to 0;
the survival time is increased by 1 for each processed video frame;
when the tracker matches a face, the survival time is reset to 0.
The system further comprises a face screening module, used for obtaining the face pictures buffered in a tracker's buffer after that tracker is deleted, and screening out one or more high-quality faces according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than that of the other buffered faces.
The invention adds a sleep state for trackers: instead of a mechanism that deletes trackers according to quality, a low-quality tracker enters the sleep state, and only when the number of sleeping trackers grows too large is the oldest one deleted. When a new face is detected, it is first checked against the trackers currently in the active state and then against those in the sleep state; if a sleeping tracker matches, it is re-enabled and set to the active state. The invention also adds face feature extraction, which greatly improves the accuracy of judging whether two faces belong to the same person. With this higher accuracy, reviving trackers becomes valuable, making long-term tracking possible.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A long-time multi-target tracking method combined with face matching is characterized by comprising the following steps:
step 1, initializing a frame counter, acquiring a video frame, detecting a face, extracting and storing a feature vector, establishing a tracker for each detected face, and setting the state of the tracker to be an active state;
step 2, acquiring a next video frame, and adding 1 to a frame counter;
step 3, matching the existing tracker with the face in the video frame, if the existing tracker is matched with the face in the video frame, updating the corresponding tracker by using the face in the video frame, and simultaneously setting the state of the tracker which is not matched with the face to be in a sleep state;
step 4, judging whether the frame counter reaches a preset value, if so, resetting the frame counter and executing the step 5, otherwise, skipping to the step 2;
step 5, carrying out face detection again on the current video frame, matching the detected face with an active state tracker and a sleep state tracker in sequence, if the detected face is matched with the active state tracker, updating the corresponding tracker by using the face, if the detected face is matched with the sleep state tracker, modifying the state of the tracker into an active state and updating the tracker by using the face, otherwise, establishing a tracker for the newly detected face, and jumping to the step 2;
when the tracker is matched with the human face, the method comprises the following steps:
matching of the active state tracker with the detected face: judging whether the detected face position is consistent with the face position tracked by the tracker, and if the detected face position is positioned in the tracker window and the coincidence area of the detected face and the face tracked by the tracker exceeds a preset threshold value, determining that the tracker is matched with the detected face;
if the active tracker is not matched, extracting the feature vector of the face picture, judging whether the distance between the detected face feature vector and the face feature vector tracked by the sleep state tracker is smaller than a preset threshold value or not, and if the distance is smaller than the preset threshold value, considering that the tracker is matched with the detected face.
2. The long-time multi-target tracking method combined with face matching as claimed in claim 1, further comprising establishing a buffer for each tracker when the tracker is created for the detected face, and initializing the survival time of the tracker, wherein the buffer is used for storing the quality of the face, the position information of the face or the face feature vector in each video frame tracked by the tracker.
3. The long-time multi-target tracking method combined with face matching as claimed in claim 2, further comprising, between the step 3 and the step 4, judging whether the number of trackers reaches an upper limit value, and if so, deleting the tracker in the sleep state with the longest survival time.
4. The long-time multi-target tracking method combined with face matching according to claim 2 or 3, wherein for the survival time of the tracker:
at tracker creation time, the time to live is set to 0;
adding 1 to the survival time when processing one frame of video frame;
when the tracker matches a face, the survival time is reset to 0.
5. The long-time multi-target tracking method combined with face matching according to claim 3, characterized in that after the tracker is deleted, the face pictures buffered in the buffer of the tracker are obtained, and one or more high-quality faces are screened out according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than that of the other buffered faces.
6. A long-time multi-target tracking system combined with face matching is characterized by comprising:
the video frame acquisition module is used for acquiring video frames;
the frame counter is used for recording the number of the acquired video frames;
the face detection module is used for carrying out face detection on the acquired video frames and extracting face characteristic vectors;
the tracker creating module is used for creating a tracker for each human face detected by the human face detecting module and setting the state of the tracker to be an active state;
the tracker matching and updating module is used for matching the existing trackers with the faces in the video frame and updating the trackers according to the matching result: if a tracker matches, the corresponding tracker is updated with the face in the video frame, and the state of any tracker not matched with a face is set to the sleep state; it is further used for, when the frame counter reaches a preset value, resetting the frame counter, performing face detection on the current video frame, and matching each detected face with the active-state trackers and then the sleep-state trackers: if an active-state tracker matches, the corresponding tracker is updated with the face; if a sleep-state tracker matches, its state is modified to active and it is updated with the face; otherwise a tracker is established for the newly detected face;
when the tracker is matched with the human face, the method comprises the following steps:
matching of the active state tracker with the detected face: judging whether the detected face position is consistent with the face position tracked by the tracker, and if the detected face position is positioned in the tracker window and the coincidence area of the detected face and the face tracked by the tracker exceeds a preset threshold value, determining that the tracker is matched with the detected face;
if the active tracker is not matched, extracting the feature vector of the face picture, judging whether the distance between the detected face feature vector and the face feature vector tracked by the sleep state tracker is smaller than a preset threshold value or not, and if the distance is smaller than the preset threshold value, considering that the tracker is matched with the detected face.
7. The system according to claim 6, further comprising a buffer creation module, configured to create a buffer for each tracker when creating trackers for detected faces, and initialize survival time of the trackers, where the buffer is used to store quality of faces, position information of faces, or face feature vectors in each video frame tracked by the trackers.
8. The long-time multi-target tracking system combined with face matching according to claim 7, wherein for the survival time of the tracker:
at tracker creation time, the time to live is set to 0;
adding 1 to the survival time when processing one frame of video frame;
when the tracker matches a face, the survival time is reset to 0.
9. The long-time multi-target tracking system combining face matching according to claim 8, further comprising a face screening module, configured to obtain the face pictures buffered in the buffer of the tracker after the tracker is deleted, and screen one or more high-quality faces according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the clarity of the face is greater than a clarity threshold;
the brightness of the face is greater than a brightness threshold;
the area of the face is greater than the area of the other faces.
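The five filtering conditions above can be sketched as a single selection pass. The threshold values and the record keys (`score`, `frontal`, `clarity`, `luminance`, `area`) are assumptions for illustration; the claims name only the thresholds themselves:

```python
# Assumed threshold values; the claims specify only that thresholds exist.
CREDIBILITY_T = 0.9
CLARITY_T = 0.4
LUMINANCE_T = 60.0

def select_best_faces(buffered, top_k=1):
    """Apply the claim-9 filtering conditions to the faces buffered by a
    tracker and keep the largest-area survivors."""
    candidates = [
        f for f in buffered
        if f["score"] > CREDIBILITY_T      # credibility above threshold
        and f["frontal"]                   # judged to be a frontal face
        and f["clarity"] > CLARITY_T       # clarity above threshold
        and f["luminance"] > LUMINANCE_T   # brightness above threshold
    ]
    # Final condition: prefer the faces with the largest area.
    return sorted(candidates, key=lambda f: f["area"], reverse=True)[:top_k]
```

Running the screen only after the tracker is deleted means one representative face per tracked person can be forwarded to recognition, instead of every frame of the track.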
CN201810371595.XA 2018-04-24 2018-04-24 Long-time multi-target tracking method and system combining face matching Active CN108629299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810371595.XA CN108629299B (en) 2018-04-24 2018-04-24 Long-time multi-target tracking method and system combining face matching

Publications (2)

Publication Number Publication Date
CN108629299A CN108629299A (en) 2018-10-09
CN108629299B true CN108629299B (en) 2021-11-16

Family

ID=63694235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810371595.XA Active CN108629299B (en) 2018-04-24 2018-04-24 Long-time multi-target tracking method and system combining face matching

Country Status (1)

Country Link
CN (1) CN108629299B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558815A (en) * 2018-11-16 2019-04-02 恒安嘉新(北京)科技股份公司 A kind of detection of real time multi-human face and tracking
CN109886951A (en) * 2019-02-22 2019-06-14 北京旷视科技有限公司 Method for processing video frequency, device and electronic equipment
CN110046548A (en) * 2019-03-08 2019-07-23 深圳神目信息技术有限公司 Tracking, device, computer equipment and the readable storage medium storing program for executing of face
CN110503059B (en) * 2019-08-27 2020-12-01 国网电子商务有限公司 A face recognition method and system
CN111274886B (en) * 2020-01-13 2023-09-19 天地伟业技术有限公司 Deep learning-based pedestrian red light running illegal behavior analysis method and system
CN111881711B (en) * 2020-05-11 2021-03-16 中富通集团股份有限公司 Big data analysis-based signal amplitude selection system
CN112686175B (en) * 2020-12-31 2025-02-14 北京仡修技术有限公司 Face capture method, system and computer readable storage medium
CN113096160B (en) * 2021-06-09 2021-10-29 深圳市优必选科技股份有限公司 Multi-target tracking method, device, equipment and storage medium
CN113822250A (en) * 2021-11-23 2021-12-21 中船(浙江)海洋科技有限公司 Ship driving abnormal behavior detection method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1959701A (en) * 2005-11-03 2007-05-09 中国科学院自动化研究所 Method for tracking multiple human faces from video in real time
CN101325691A (en) * 2007-06-14 2008-12-17 清华大学 Tracking method and tracking device for fusing multiple observation models with different lifetimes
CN102496009A (en) * 2011-12-09 2012-06-13 北京汉邦高科数字技术股份有限公司 Multi-face Tracking Method in Smart Bank Video Surveillance
CN102622769A (en) * 2012-03-19 2012-08-01 厦门大学 Multi-target tracking method by taking depth as leading clue under dynamic scene
CN104361327A (en) * 2014-11-20 2015-02-18 苏州科达科技股份有限公司 Pedestrian detection method and system
CN104834946A (en) * 2015-04-09 2015-08-12 清华大学 Method and system for non-contact sleep monitoring
CN106599836A (en) * 2016-12-13 2017-04-26 北京智慧眼科技股份有限公司 Multi-face tracking method and tracking system
CN107492116A (en) * 2017-09-01 2017-12-19 深圳市唯特视科技有限公司 A kind of method that face tracking is carried out based on more display models
CN107590452A (en) * 2017-09-04 2018-01-16 武汉神目信息技术有限公司 A kind of personal identification method and device based on gait and face fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5830373B2 (en) * 2011-12-22 2015-12-09 オリンパス株式会社 Imaging device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A distributed tracking algorithm for target interception in face-structured sensor networks;Efren L. Souza et al;《39th Annual IEEE Conference on Local Computer Networks》;20141231;第1卷;470-473 *
Fast and stable face detection in image sequences based on the MS-KCF model; Ye Yuanzheng et al.; Journal of Computer Applications; 2018-04-13; Vol. 38, No. 8; abstract, Section 1, Figs. 1, 5 and 7 *
Face detection and tracking based on multi-information fusion; Zhao Lei; China Masters' Theses Full-text Database (Information Science and Technology); 2014-06-15 (No. 4); I138-884 *
Research on cooperative tracking of moving targets in wireless sensor networks; Xiong Jing; China Doctoral Dissertations Full-text Database (Information Science and Technology); 2017-02-15 (No. 2); I140-67 *
GM-PHD multi-target tracking algorithm with optimal observation assignment; Zhang Tao et al.; Journal of Signal Processing; 2015-01-13 (No. 12); 1419-1426 *

Also Published As

Publication number Publication date
CN108629299A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN108629299B (en) Long-time multi-target tracking method and system combining face matching
CN108734107B (en) Multi-target tracking method and system based on human face
CN109872341B (en) A method and system for high-altitude parabolic detection based on computer vision
CN111539265B (en) Method for detecting abnormal behavior in elevator car
US20190304102A1 (en) Memory efficient blob based object classification in video analytics
CN111291633B (en) A real-time pedestrian re-identification method and device
US9064325B2 (en) Multi-mode video event indexing
US20190130165A1 (en) System and method for selecting a part of a video image for a face detection operation
CN107316035A (en) Object identifying method and device based on deep learning neutral net
US9953240B2 (en) Image processing system, image processing method, and recording medium for detecting a static object
CN111462155B (en) Motion detection method, device, computer equipment and storage medium
Ma et al. Detecting Motion Object By Spatio-Temporal Entropy.
US11871125B2 (en) Method of processing a series of events received asynchronously from an array of pixels of an event-based light sensor
US9053355B2 (en) System and method for face tracking
US20220122360A1 (en) Identification of suspicious individuals during night in public areas using a video brightening network system
US20160210759A1 (en) System and method of detecting moving objects
CN112184771A (en) Community personnel trajectory tracking method and device
Haque et al. Perception-inspired background subtraction
KR101492059B1 (en) Real Time Object Tracking Method and System using the Mean-shift Algorithm
Jenifa et al. Rapid background subtraction from video sequences
Apewokin et al. Embedded real-time surveillance using multimodal mean background modeling
Xu et al. Feature extraction algorithm of basketball trajectory based on the background difference method
CN107742115A (en) A method and system for detecting and tracking moving objects based on video surveillance
Mustafah et al. Real-time face detection and tracking for high resolution smart camera system
Zhang et al. Nonparametric on-line background generation for surveillance video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant