
CN105488478B - Face recognition system and method - Google Patents


Info

Publication number
CN105488478B
CN105488478B (application CN201510872357.3A)
Authority
CN
China
Prior art keywords
face
image
detected
frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510872357.3A
Other languages
Chinese (zh)
Other versions
CN105488478A (en)
Inventor
梁伯均
李庆林
张伟
黄展鹏
王晶
苏哲昆
许金涛
张帅
张广程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201510872357.3A priority Critical patent/CN105488478B/en
Publication of CN105488478A publication Critical patent/CN105488478A/en
Application granted granted Critical
Publication of CN105488478B publication Critical patent/CN105488478B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present disclosure relates to a face recognition system and method. The system comprises at least a data input module, a face analysis module and a data output module. The data input module provides an image sequence to be detected to the face analysis module; the face analysis module performs face recognition by means of deep learning, retrieves similar faces through a KD-tree, performs retrieval and comparison in parallel over a plurality of sub-databases and merges the analysis results; the data output module superimposes face detection frames and face-related statistical information onto the original video and outputs the composited video to the front end for display. A method for implementing the system is also provided. The disclosed system recognizes faces automatically, rapidly and accurately.

Description

Face recognition system and method
Technical Field
The present disclosure relates to the field of video surveillance, and in particular, to a face recognition system and method.
Background
At present, in order to attract and retain more valuable and potential VIP customers, increasing attention is being paid to providing higher-quality, targeted services for VIP customers. Traditional industries distinguish such customers with VIP cards and similar means, but carrying a card or reporting a card number is neither user-friendly nor convenient. Face recognition technology identifies a person from facial feature information and has the advantages of being contactless, concurrent, non-compulsory and intuitive.
Disclosure of Invention
To address some of these problems, the present disclosure provides a face recognition system and method. The system performs face recognition by means of deep learning and provides a complete video-surveillance face recognition system that can support bank VIP recognition, store greeting, and face recognition in surveillance scenes. Using the time, place and duration of the faces detected by the system, passenger flow statistics can further be produced, helping to improve service or to look up particular users. In addition, the disclosure provides a method for implementing the system.
A face recognition system, comprising at least: a data input module, a face analysis module and a data output module;
the data input module is used for transmitting the image sequence to be detected to the face analysis module;
the data output module comprises an image output unit and/or a message subscription unit;
the image output unit is used for marking the faces obtained by the face analysis module and superimposing the face-related information onto the original video;
the message subscription unit is used for sending event messages to terminal subscribers;
the face analysis module is used for detecting, analyzing and recognizing the faces in the image sequence to be detected, and comprises at least the following units:
U100, face detection and tracking unit: detecting and tracking faces in the received images, judging their quality, selecting several frames that meet the requirements as key frames, and transmitting the key frames to the face comparison unit;
U200, face comparison unit: receiving the key frames, extracting the face features of each frame, and retrieving and selecting several similar face features from a user information database for comparison and analysis;
wherein: the face features are expressed as multi-dimensional feature vectors;
the user information database allows M face images of a single person to be stored under the same first identifier.
To facilitate implementation, in one embodiment a method is provided for implementing the system of the present disclosure, namely a face recognition method comprising at least:
S100, face detection and tracking: receiving an image sequence to be detected, detecting and tracking faces in the images, judging their quality, and selecting several frames that meet the requirements as key frames for comparison and analysis; the image sequence to be detected is a plurality of frame images within a certain time interval;
S200, face comparison and analysis: extracting the face features of the key frames, and retrieving and selecting several similar face features from a user information database for comparison and analysis;
wherein: the face features are expressed as multi-dimensional feature vectors;
the user information database allows M face images of a single person to be stored under the same first identifier; the data output module comprises an image output unit and/or a message subscription unit;
S300, outputting a result: marking the faces detected in the key frames with identification frames and superimposing the face-related information onto the original video; and/or sending subscribed event messages to the end user.
Detailed Description
In a basic embodiment, there is provided a face recognition system comprising at least: a data input module, a face analysis module and a data output module;
the data input module is used for transmitting the image sequence to be detected to the face analysis module;
the data output module comprises an image output unit and/or a message subscription unit;
the image output unit is used for marking the faces obtained by the face analysis module and superimposing the face-related information onto the original video;
the message subscription unit is used for sending event messages to terminal subscribers;
the face analysis module is used for detecting, analyzing and recognizing the faces in the image sequence to be detected, and comprises at least the following units:
U100, face detection and tracking unit: detecting and tracking faces in the received images, judging their quality, selecting several frames that meet the requirements as key frames, and transmitting the key frames to the face comparison unit;
U200, face comparison unit: receiving the key frames, extracting the face features of each frame, and retrieving and selecting several similar face features from a user information database for comparison and analysis;
wherein: the face features are expressed as multi-dimensional feature vectors;
the user information database allows M face images of a single person to be stored under the same first identifier.
In this embodiment, the output of the data input module is a face image, and its input can be multiple network video sources, an image sequence, an offline video or a real-time video, as long as images containing faces can be obtained after processing. During detection, all received images may be examined, or only preferred images may be examined. The image sequence to be detected can be several frames collected within a certain time interval, or several manually selected frames. In one embodiment, the system performs face detection every 6 frames. When an image is examined, the face position and face key point information are extracted, where the key point information can include the positions of the eye corners, the ends of the eyebrows, the mouth corners, the nose tip and so on. When the image sequence is a single frame, that image itself is the key frame; when the image sequence contains several frames, the N frames of best quality are selected from the sequence as key frames. Quality can be judged by scoring a set of indexes and selecting the N highest-scoring frames as key frames; the indexes include face picture sharpness, size, whether it is a real face, occlusion, illumination and so on. Face features are represented by multi-dimensional feature vectors; in one embodiment, feature vectors of approximately 180 dimensions are used. Detected faces are tracked in subsequent frames. During retrieval, the N groups of face features are taken as a whole, similar faces are searched for in the user information database, and the several faces with the highest scores are selected as the returned result. The comparison and analysis results, together with the original image, can be sent to the image output unit (the video assembly and distribution unit) via HTTP or a message server. In one embodiment, when a face from the face library is recognized in the monitored area, the system immediately sends a notification or alarm to subscribers; the message includes the identification result of the particular person of interest (e.g. a VIP or a suspicious person), the location and time, a face picture and other information.
In one embodiment, a method for the quality determination in U100 is provided, comprising the steps of:
S1010, for each detected face image, first judging whether the distance between the two eyes meets a set requirement; if so, executing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, executing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal face score meets the set requirement; if so, judging that the frame can be used for recognizing the face; otherwise, discarding the detected face image.
In one embodiment, a concrete implementation of key frame selection is provided. In this embodiment, for a single tracked face snapshot, whether a frame is used for recognition is determined according to an interocular distance > 25, a face confidence score > 0.95, and the frontal face score. Further, a programmatic key frame selection method is provided: for each image tracked as the same face, a key frame container with a capacity of 10 is maintained internally. Initially, while fewer than 10 frames have been collected, every frame is stored in the container; once the container is full, a frame suitable for recognition replaces the stored frame of worst quality, provided its frame number differs from that of the most recently stored frame by more than 10. The number of frames processed while tracking the same face is recorded, and the tracking ends once it exceeds 20. A sketch of this container logic follows.
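The container logic above can be expressed as a short sketch. This is a minimal illustration in Python, not the patented implementation: the Frame fields and the combined quality score are assumptions standing in for the embodiment's per-frame measurements.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    frame_no: int        # position in the video stream
    eye_distance: float  # interocular distance in pixels
    confidence: float    # face confidence score
    quality: float       # combined quality score (frontal pose, sharpness, ...)

def passes_quality_gate(f: Frame) -> bool:
    # Thresholds from this embodiment: interocular distance > 25, confidence > 0.95.
    return f.eye_distance > 25 and f.confidence > 0.95

class KeyFrameContainer:
    """Holds at most 10 key frames per tracked face, replacing the worst one."""
    CAPACITY = 10
    MIN_GAP = 10  # required frame-number gap to the most recently stored key frame

    def __init__(self) -> None:
        self.frames: list[Frame] = []
        self.last_stored_no: int = -10**9

    def offer(self, f: Frame) -> None:
        if not passes_quality_gate(f):
            return
        if len(self.frames) < self.CAPACITY:
            self.frames.append(f)
            self.last_stored_no = f.frame_no
            return
        # Container full: accept only frames more than MIN_GAP frames after the
        # last stored one, and only if they beat the worst frame currently held.
        if f.frame_no - self.last_stored_no <= self.MIN_GAP:
            return
        worst = min(range(len(self.frames)), key=lambda i: self.frames[i].quality)
        if f.quality > self.frames[worst].quality:
            self.frames[worst] = f
            self.last_stored_no = f.frame_no
```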
In one embodiment, an implementation of the detection and tracking in U100 is provided, comprising the following steps:
S101, performing face detection once every several frames, and, when a face is detected, marking the part including the face with a marking frame for each face meeting the quality requirement;
S102, judging whether the marked face area coincides with an already-detected face area; if the coincidence degree meets a preset threshold, the two are regarded as the same face, and step S103 is entered; otherwise, the currently marked face is regarded as a new face, and this round of tracking ends;
S103, performing face alignment on the marked face within the marking frame, detecting the positions of the face key points, computing the bounding rectangle around the key points, and using it to replace the previously detected image in the marking frame regarded as the same face.
In this embodiment, the part including the face is marked with a marking frame; the marked part may be the head and, preferably, may further include the shoulders, since marking that includes the shoulders can improve the recognition rate. In either case, the coincidence degree can be measured by a confidence value: when the computed confidence falls within a certain range, the two objects are regarded as the same object, and the required range can be determined experimentally. The sketch below illustrates one common overlap measure.
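One common measure of the coincidence degree of two marking frames is intersection-over-union. The patent does not name a specific measure, so the sketch below, including the example threshold, is an assumption rather than the embodiment's own test.

```python
def iou(a: tuple, b: tuple) -> float:
    """Overlap of two boxes given as (x1, y1, x2, y2), as intersection over union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def same_face(marked_box: tuple, detected_box: tuple, threshold: float = 0.5) -> bool:
    # threshold = 0.5 is illustrative; the patent leaves the range to experiment
    return iou(marked_box, detected_box) >= threshold
```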
Preferably, multiple libraries and parallel retrieval are used: if the user information database in U200 includes a plurality of sub-databases, retrieval is performed in parallel over all of them, the comparison analysis is carried out on each retrieval result, and the analysis results are then merged. This supports importing a large number of face images into the user information database without increasing retrieval time. Each sub-database holds a limited number of face images, and the several face images of a single person are imported into the same sub-database. In one embodiment, each sub-database is searched by its own thread, and the results of the sub-databases are merged according to the comparison analysis, as in the sketch below.
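A minimal sketch of this multi-threaded retrieval, assuming each sub-database object exposes a hypothetical search(features, top_n) method returning (user_id, similarity) pairs:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_search(sub_databases, features, top_n=5):
    """Search every sub-database in its own thread, then merge by similarity."""
    with ThreadPoolExecutor(max_workers=len(sub_databases)) as pool:
        partial = list(pool.map(lambda db: db.search(features, top_n), sub_databases))
    merged = [hit for hits in partial for hit in hits]  # flatten per-database results
    merged.sort(key=lambda hit: hit[1], reverse=True)   # highest similarity first
    return merged[:top_n]
```

Because every sub-database is bounded in size, enrolling more faces means adding sub-databases rather than growing one index, so the per-thread search time stays roughly constant.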
In one embodiment, a method for obtaining the face features of the face images in the library is provided: the extraction in U200 uses the DeepID deep learning algorithm to extract face features. Features obtained this way help recognize faces accurately. In one embodiment, this extraction method produces feature vectors of approximately 180 dimensions.
Since face features are represented by multi-dimensional feature vectors, in one embodiment a method is provided for reducing the number of comparisons and accelerating the search for similar feature vectors: the similar face features in U200 are obtained through the following steps:
S2011, building a KD-tree: during retrieval, a KD-tree is built to search for the K nearest neighbours, where K ≥ M;
S2012, traversing the KD-tree: at each level of the traversal, one dimension of the face feature is selected for comparison to determine the branch to search at the next level, finally yielding several face features similar to the key frame.
Selecting a single feature dimension per level keeps the number of comparisons needed to choose the next branch small; a sketch follows.
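SciPy's KD-tree can stand in for the index described here; this is an illustrative substitute, not the patent's own data structure, and it assumes the enrolled feature vectors are stacked into one matrix. cKDTree.query performs the level-by-level, one-dimension-per-split traversal described above.

```python
import numpy as np
from scipy.spatial import cKDTree

# Assumed setup: one ~180-dimensional feature vector per enrolled face image.
library_features = np.random.rand(10_000, 180)
tree = cKDTree(library_features)

def k_nearest_faces(query_feature: np.ndarray, k: int):
    """Return the indices and distances of the k nearest enrolled features (k >= M)."""
    distances, indices = tree.query(query_feature, k=k)
    return indices, distances
```

Exact KD-tree search loses efficiency as dimensionality grows toward 180, which may be one reason the disclosure pairs it with several smaller sub-databases searched in parallel.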
In one embodiment, in step S102, when the newly marked face and the detected face are judged to be the same face, the newly marked face image and the detected face are labelled with the same second identifier; and the comparison analysis in U200 includes the following steps:
S201, calculating a quality score q_i, i ∈ [1, M], for each of the M frames of images carrying the same second identifier, according to frontal pose and sharpness;
S202, retrieving and comparing each of the M frames against the face library to find the N most similar users, with corresponding similarities s_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, with K users obtained in total from the retrieval over the M frames, calculating the similarity score of each of the K users as the quality-weighted sum of its per-frame similarities, score(user_j) = Σ_{i=1}^{M} q_i · s_{i,user_j};
S204, sorting the K users in descending order of score(user_j) and selecting the several most similar users. A sketch of this aggregation follows.
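The aggregation of S201 to S204 can be sketched as below; reading the score as a quality-weighted sum of per-frame similarities is an assumption reconstructed from the surrounding definitions, since the original formula appears only as a figure.

```python
from collections import defaultdict

def rank_users(quality, per_frame_hits, top=5):
    """
    quality:        q_i for the M key frames, i in [1, M]
    per_frame_hits: M lists, each holding (user_id, s_i_user_j) pairs for that
                    frame's N most similar users
    """
    score = defaultdict(float)
    for q_i, hits in zip(quality, per_frame_hits):
        for user_id, s in hits:
            score[user_id] += q_i * s   # score(user_j) = sum_i q_i * s_{i,user_j}
    # Step S204: descending sort over the K users seen in any frame.
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)[:top]
```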
In this comparison mode, if the user information database includes a plurality of sub-databases, the final recognition result can be obtained in several ways. For example, after the sub-databases are searched in parallel, steps S202 to S204 are performed for each sub-database, all candidate users are then ranked by similarity, and the top results are returned. Alternatively, each sub-database returns several face features ranked by score within that sub-database; the returned features are then re-ranked by similarity value, and the face images corresponding to the top-ranked features are selected as the returned result.
Optionally, after the comparison analysis, the face comparison unit further performs the following operations:
S2031, calculating the face attributes by a deep learning method;
S2032, judging whether the detected face already exists in the user information database; if it exists, updating the stored face attribute result; otherwise, storing the recognition result together with the face attribute calculation result.
The face attributes include the user's gender and age, and appearance attributes such as wearing glasses, a hat or a mask. Storing face attributes gives the system extra retrieval dimensions when it provides an external retrieval function: results can be filtered by time, by the similarity value between the detected face and the enrolled faces, by appearance attributes and by location, which narrows the retrieval range, speeds up retrieval and improves retrieval accuracy.
Optionally, on the basis of storing the face attribute calculation results, a statistical time point and location may be attached to each result; that is, the face attribute calculation result further includes the time point and place at which the image was acquired. This provides data support for locating when a face appeared in an area. In one embodiment, the system builds a user information database for special persons such as VIPs or suspicious persons; when querying such a person, the query can be compared directly with the face features of the face images stored in the database, making it convenient to locate quickly when a certain face appeared in a certain area.
In one embodiment, the system has a people flow statistics function; that is, the face comparison unit further performs the following operation:
S2030, counting the number of faces detected within a certain time, and the time and duration of each face's appearances. In this embodiment, the system compares the currently detected face with the faces already detected within a certain time; if they are judged to be the same person, they are counted as a single appearance. In this way, the number of distinct faces appearing in a time interval can be obtained, as well as the number of appearances of the same face across several time intervals and the time and duration of each appearance. To implement such statistics, the system may build a temporary user information database for the detected faces, rebuilt periodically over time; it may also build a dedicated guest feature library. When the system is used in a store, these passenger flow statistics can be produced for the store. A sketch of the bookkeeping follows.
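A sketch of the bookkeeping, assuming an upstream step has already mapped each detection to a temporary face_id by comparison against recently seen faces:

```python
from collections import defaultdict

class FootfallCounter:
    """Counts distinct faces and per-face appearance times within a window."""

    def __init__(self) -> None:
        self.appearances = defaultdict(list)   # face_id -> [(start_ts, end_ts), ...]

    def record(self, face_id: str, start_ts: float, end_ts: float) -> None:
        # One tracked appearance; re-detections inside it count as the same visit.
        self.appearances[face_id].append((start_ts, end_ts))

    def distinct_faces(self) -> int:
        return len(self.appearances)

    def visits(self, face_id: str):
        """Number of appearances plus the start time and duration of each."""
        spans = self.appearances.get(face_id, [])
        return len(spans), [(start, end - start) for start, end in spans]
```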
In one embodiment, when the source of the face images is video, the video may come from multiple network video sources, offline video or real-time video. To obtain video frame images, the data input module further includes a video decoding unit, and the data output module further includes a video assembly and distribution unit;
the video decoding unit is used for reading a real-time video stream or a local video file and decoding it into video frame images;
the video assembly and distribution unit is used for re-encoding into a video the original video frame images onto which the face detection frames and face-related information have been superimposed.
In this embodiment, when the data input module sends the image sequence to be detected to the face analysis module, the detection may be organized through a queue.
In one embodiment, during video assembly, the face detection frames, recognized user names and other information produced by the face analysis module are superimposed onto the original video frames, which are then re-encoded into x264-format video; after assembly the video is broadcast through live555 for playback in a browser or other clients. The sketch below illustrates the overlay and re-encoding step.
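The overlay and re-encoding can be illustrated with OpenCV; cv2.VideoWriter replaces the x264/live555 packaging named above purely for brevity, so this is a simplified stand-in rather than the embodiment's pipeline.

```python
import cv2

def overlay_and_write(frames, detections, out_path: str, fps: float = 25.0) -> None:
    """Draw face boxes and recognized names on each frame, then re-encode."""
    writer = None
    for frame, dets in zip(frames, detections):
        if writer is None:
            h, w = frame.shape[:2]
            fourcc = cv2.VideoWriter_fourcc(*"avc1")   # H.264 where available
            writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
        for (x1, y1, x2, y2), name in dets:            # one box + label per face
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, name, (x1, y1 - 6),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        writer.write(frame)
    if writer is not None:
        writer.release()
```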
In one embodiment, the video decoding unit reads an RTSP video stream or a local video file, decodes it into video frame images with a VLC video decoder, and hands the images to the face analysis module through a buffer queue, as in the sketch below.
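The buffered hand-off can be sketched as below, with OpenCV's cv2.VideoCapture (which reads RTSP URLs and local files alike) standing in for the VLC decoder; the substitution, the stream URL and the queue size are assumptions.

```python
import queue
import threading

import cv2

frame_queue: queue.Queue = queue.Queue(maxsize=64)  # buffer between decode and analysis

def decode(source: str) -> None:
    """Read an RTSP URL or local video file and push frames into the queue."""
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame_queue.put(frame)   # blocks when the analysis side falls behind
    finally:
        cap.release()
        frame_queue.put(None)        # sentinel: end of stream

threading.Thread(target=decode, args=("rtsp://camera/stream1",), daemon=True).start()
while (frame := frame_queue.get()) is not None:
    pass  # hand each decoded frame to the face analysis module here
```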
Preferably, the system further includes a camera, and the data input module further includes a video configuration unit used to configure the monitoring parameters of a video channel scene; in one embodiment, the video channel address must be configured. In this way the system can be applied to real-time monitoring and real-time recognition of monitored faces. With an alarm function added on this basis, the system can provide a complete face recognition monitoring service for scenes with security requirements: for example, at a crowd entrance of interest to public security, the faces of passing people are automatically captured, their appearance characteristics automatically recognized and rapidly compared with enrolled suspicious persons, and an alarm prompt is given if a suspicious person is found. As another example, it can be applied to the access-control security of a residential community.
In one embodiment, the system can display the video and the recognition results in real time. Where a user needs VIP customer identification for service, VIP customers entering the store can be recognized and a prompt given. The system helps the user count the number of customers entering the store each day, and the time and number of each customer's visits. By storing face pictures of customers entering the store, computing and storing their gender, age and appearance attributes such as whether they wear glasses, a hat or a mask, and storing the time and duration of each visit, it lets the user filter customer queries conveniently. It can also display information on customers who entered the store recently, including face pictures, visit times, locations and visit counts.
In one embodiment, the system provides a video monitoring service for a public security system: it detects faces entering a monitored area, compares them with enrolled suspicious persons, and gives a prompt when a person to be monitored is found. The prompt can take one form or any combination of several: static text or pattern, dynamic text or pattern, and sound.
To facilitate implementation, in one embodiment a method is provided for implementing the system of the present disclosure, namely a face recognition method comprising at least:
S100, face detection and tracking: receiving an image sequence to be detected, detecting and tracking faces in the images, judging their quality, and selecting several frames that meet the requirements as key frames for comparison and analysis; the image sequence to be detected is a plurality of frame images within a certain time interval;
S200, face comparison and analysis: extracting the face features of the key frames, and retrieving and selecting several similar face features from a user information database for comparison and analysis;
wherein: the face features are expressed as multi-dimensional feature vectors;
the user information database allows M face images of a single person to be stored under the same first identifier; the data output module comprises an image output unit and/or a message subscription unit;
S300, outputting a result: marking the faces detected in the key frames with identification frames and superimposing the face-related information onto the original video; and/or sending subscribed event messages to the end user.
In this embodiment, the output of the data input module is a face image, and its input can be multiple network video sources, an image sequence, an offline video or a real-time video, as long as images containing faces can be obtained after processing. During detection, all received images may be examined, or only preferred images may be examined. In one embodiment, the system performs face detection every 6 frames. When an image is examined, the face position and face key point information are extracted, where the key point information can include the positions of the eye corners, the ends of the eyebrows, the mouth corners, the nose tip and so on. When the image sequence is a single frame, that image itself is the key frame; when the image sequence contains several frames, the N frames of best quality are selected from the sequence as key frames. Quality can be judged by scoring a set of indexes and selecting the N highest-scoring frames as key frames; the indexes include face picture sharpness, size, whether it is a real face, occlusion, illumination and so on. Face features are represented by multi-dimensional feature vectors; in one embodiment, feature vectors of approximately 180 dimensions are used. Detected faces are tracked in subsequent frames. During retrieval, the N groups of face features are taken as a whole, similar faces are searched for in the user information database, and the several faces with the highest scores are selected as the returned result. The comparison and analysis results, together with the original image, can be sent to the image output unit (the video assembly and distribution unit) via HTTP or a message server. In one embodiment, when a face from the face library is recognized in the monitored area, the system immediately sends a notification or alarm to subscribers; the message includes the identification result of the particular person of interest (e.g. a VIP or a suspicious person), the location and time, a face picture and other information.
In one embodiment, a method for the quality determination in S100 is provided, which includes the following steps:
S1010, for each detected face image, first judging whether the distance between the two eyes meets a set requirement; if so, executing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, executing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal face score meets the set requirement; if so, judging that the frame can be used for recognizing the face; otherwise, discarding the detected face image.
In one embodiment, a concrete implementation of key frame selection is provided. In this embodiment, for a single tracked face snapshot, whether a frame is used for recognition is determined according to an interocular distance > 25, a face confidence score > 0.95, and the frontal face score. Further, a programmatic key frame selection method is provided: for each image tracked as the same face, a key frame container with a capacity of 10 is maintained internally. Initially, while fewer than 10 frames have been collected, every frame is stored in the container; once the container is full, a frame suitable for recognition replaces the stored frame of worst quality, provided its frame number differs from that of the most recently stored frame by more than 10. The number of frames processed while tracking the same face is recorded, and the tracking ends once it exceeds 20.
In one embodiment, an implementation of the detection and tracking in S100 is provided, comprising the following steps:
S101, performing face detection once every several frames, and, when a face is detected, marking the part including the face with a marking frame for each face meeting the quality requirement;
S102, judging whether the newly marked face area coincides with an already-detected face area; if the coincidence degree meets a preset threshold, the two are regarded as the same face, and step S103 is entered; otherwise, the currently marked face is regarded as a new face, and this round of tracking ends;
S103, performing face alignment on the newly marked face within the marking frame, detecting the positions of the face key points, computing the bounding rectangle around the key points, and using it to replace the previously detected image in the marking frame regarded as the same face.
In this embodiment, the part including the face is marked with a marking frame; the marked part may be the head and, preferably, may further include the shoulders, since marking that includes the shoulders can improve the recognition rate. In either case, the coincidence degree can be measured by a confidence value: when the computed confidence falls within a certain range, the two objects are regarded as the same object, and the required range can be determined experimentally.
Preferably, multiple libraries and parallel retrieval are used: the user information database in S200 includes a plurality of sub-databases, and retrieval is performed in parallel over all of them; the comparison analysis is carried out on each retrieval result, and the analysis results are then merged. This supports importing a large number of face images into the user information database without increasing retrieval time. Each sub-database holds a limited number of face images, and the several face images of a single person are imported into the same sub-database. In one embodiment, each sub-database is searched by its own thread, and the results of the sub-databases are merged according to the comparison analysis.
In one embodiment, a method for obtaining the face features of the face images in the library is provided: the extraction in S200 uses the DeepID deep learning algorithm to extract face features. In one embodiment, this extraction method produces feature vectors of approximately 180 dimensions.
Since face features are represented by multi-dimensional feature vectors, in one embodiment a method is provided for reducing the number of comparisons and accelerating the search for similar feature vectors: the similar face features in S200 are obtained through the following steps:
S2011, building a KD-tree: when searching for similar face features, a KD-tree is built to search for the K nearest neighbours, where K ≥ M;
S2012, traversing the KD-tree: at each level of the traversal, one dimension of the face feature is selected for comparison to determine the branch to search at the next level, finally yielding several face features similar to the key frame.
In one embodiment, in step S102, when the newly marked face and the detected face are judged to be the same face, the newly marked face image and the detected face are labelled with the same second identifier; and the comparison analysis in S200 includes the following steps:
S201, calculating a quality score q_i, i ∈ [1, M], for each of the M frames of images carrying the same second identifier, according to frontal pose and sharpness;
S202, retrieving and comparing each of the M frames against the face library to find the N most similar users, with corresponding similarities s_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, with K users obtained in total from the retrieval over the M frames, calculating the similarity score of each of the K users as the quality-weighted sum of its per-frame similarities, score(user_j) = Σ_{i=1}^{M} q_i · s_{i,user_j};
S204, sorting the K users in descending order of score(user_j) and selecting the several most similar users.
In this comparison mode, if the user information database includes a plurality of sub-databases, the final recognition result can be obtained in several ways. For example, after the sub-databases are searched in parallel, steps S202 to S204 are performed for each sub-database, all candidate users are then ranked by similarity, and the top results are returned. Alternatively, each sub-database returns several face features ranked by score within that sub-database; the returned features are then re-ranked by similarity value, and the face images corresponding to the top-ranked features are selected as the returned result.
Optionally, after the comparison analysis, step S200 further performs the following operations:
S2031, calculating the face attributes by a deep learning method;
S2032, judging whether the detected face already exists in the user information database; if it exists, updating the stored face attribute result; otherwise, storing the recognition result together with the face attribute calculation result.
The face attributes include the user's gender and age, and appearance attributes such as wearing glasses, a hat or a mask. Storing face attributes gives the system extra retrieval dimensions when it provides an external retrieval function: results can be filtered by time, by the similarity value between the detected face and the enrolled faces, by appearance attributes and by location, which narrows the retrieval range, speeds up retrieval and improves retrieval accuracy.
Optionally, on the basis of storing the face attribute calculation results, a statistical time point and location may be attached to each result; that is, the face attribute calculation result further includes the time point and place at which the face image was first acquired. This provides data support for locating when a face appeared in an area. In one embodiment, the system builds a user information database for special persons such as VIPs or suspicious persons; when querying such a person, the query can be compared directly with the face features of the face images stored in the database, making it convenient to locate quickly when a certain face appeared in a certain area.
In one embodiment, the method includes a people flow statistics function; that is, before step S2031, step S200 further performs the following operation:
S2030, counting the number of faces detected within a certain time, and the time and duration of each face's appearances.
In one embodiment, when the source of the face images is video, the video may come from multiple network video sources, offline video or real-time video. To obtain video frame images, before the image sequence to be detected is received in step S100, step S100 further includes reading a real-time video stream or a local video file and decoding it into a sequence of video frame images;
after the faces detected in the key frames are marked with identification frames and the face-related information is superimposed onto the original video in step S300, step S300 further includes re-encoding into a video the original video frame sequence onto which the face detection frames and face-related information have been superimposed.
In one embodiment, step S100 performs detection and tracking on the images of the image sequence to be detected sequentially, through a queue.
In one embodiment, to obtain the real-time video stream conveniently, the video is captured by a camera, and the method further includes configuring video channel scene monitoring parameters for the camera that captures the real-time video stream. In this way the method can be applied to real-time monitoring and real-time recognition of monitored faces. With an alarm function added on this basis, it can provide a complete face recognition monitoring service for scenes with security requirements: for example, at a crowd entrance of interest to public security, the faces of passing people are automatically captured, their appearance characteristics automatically recognized and rapidly compared with enrolled suspicious persons, and an alarm prompt is given if a suspicious person is found. As another example, it can be applied to the access-control security of a residential community.
In one embodiment, the method can display the video and the recognition results in real time. Where a user needs VIP customer identification for service, VIP customers entering the store can be recognized and a prompt given. The method helps the user count the number of customers entering the store each day, and the time and number of each customer's visits. By storing face pictures of customers entering the store, computing and storing their gender, age and appearance attributes such as whether they wear glasses, a hat or a mask, and storing the time and duration of each visit, it lets the user filter customer queries conveniently. It can also display information on customers who entered the store recently, including face pictures, visit times, locations and visit counts.
In one embodiment, the system implemented by the method provides a video monitoring service for a public security system: it detects faces entering a monitored area, compares them with enrolled suspicious persons, and gives a prompt when a person to be monitored is found. The prompt can take one form or any combination of several: static text or pattern, dynamic text or pattern, and sound.
The face recognition technology of the present disclosure can rapidly recognize visiting customers through face recognition and associate them with the related customer information database. Through data statistics, frequently visiting and valuable potential customers can also be identified and given key services and targeted product recommendations. In addition, with an alarm function added on this basis, a complete face recognition monitoring service can be provided for public security: at crowd entrances and exits of interest to public security, the faces of passing people are automatically captured, their appearance characteristics automatically recognized and rapidly compared with enrolled suspicious persons, and an alarm prompt is given if a suspicious person is found.
The present disclosure has been described in detail, and the principles and embodiments of the present disclosure have been explained herein by using specific examples, which are provided only for the purpose of helping understanding the method and the core concept of the present disclosure; meanwhile, for those skilled in the art, according to the idea of the present disclosure, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present description should not be construed as a limitation to the present disclosure.

Claims (24)

1. A face recognition system, characterized in that the system comprises at least:
a data input module, a face analysis module and a data output module;
the data input module is used for transmitting the image sequence to be detected to the face analysis module;
the data output module comprises an image output unit and/or a message subscription unit;
the image output unit is used for marking the faces obtained by the face analysis module and superimposing the face-related information onto the original video;
the message subscription unit is used for sending event messages to terminal subscribers;
the face analysis module is used for detecting, analyzing and recognizing the faces in the image sequence to be detected, and comprises at least the following units:
U100, face detection and tracking unit: detecting and tracking faces in the received images, judging their quality, selecting several frames that meet the requirements as key frames, and transmitting the key frames to the face comparison unit;
U200, face comparison unit: receiving the key frames, extracting the face features of each frame, and retrieving and selecting several similar face features from a user information database for comparison and analysis;
wherein: the face features are expressed as multi-dimensional feature vectors;
the user information database allows M face images of a single person to be stored under the same first identifier;
the face detection and tracking unit detecting and tracking faces comprises: performing face detection once every several frames, and marking the part including the face with a marking frame for each face meeting the quality requirement; and, when the coincidence degree of the marked face area and an already-detected face area meets a preset threshold, performing face tracking according to the image in the marking frame.
2. The system of claim 1, wherein the quality determination in U100 comprises the steps of:
S1010, for each detected face image, first judging whether the distance between the two eyes meets a set requirement; if so, executing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, executing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal face score meets the set requirement; if so, judging that the frame can be used for recognizing the face; otherwise, discarding the detected face image.
3. The system according to claim 1, wherein, when the coincidence degree of the marked face area and the detected face area meets a preset threshold, performing face tracking according to the image in the marking frame comprises the following steps:
S102, judging whether the marked face area coincides with the detected face area; if the coincidence degree meets the preset threshold, the two are regarded as the same face, and step S103 is entered; otherwise, the currently marked face is regarded as a new face, and this round of tracking ends;
S103, performing face alignment on the marked face within the marking frame, detecting the positions of the face key points, computing the bounding rectangle around the key points, and using it to replace the previously detected image in the marking frame regarded as the same face.
4. The system of claim 1, wherein the user information database in U200 comprises a plurality of sub-databases, and the retrieval is a parallel search based on the plurality of sub-databases.
5. The system according to claim 1, wherein the extraction in U200 uses the DeepID deep learning algorithm to extract face features.
6. The system according to claim 1, wherein the similar face features in U200 are obtained through the following steps:
S2011, building a KD-tree: when searching for similar face features, a KD-tree is built to search for the K nearest neighbours, where K ≥ M;
S2012, traversing the KD-tree: at each level of the traversal, one dimension of the face feature is selected for comparison to determine the branch to search at the next level, finally yielding several face features similar to the key frame.
7. The system of claim 3, wherein:
in step S102, when the marked face and the detected face are the same face, the marked face image and the detected face are labelled with the same second identifier;
and the comparison analysis in U200 includes the following steps:
S201, calculating a quality score q_i, i ∈ [1, M], for each of the M frames of images carrying the same second identifier, according to frontal pose and sharpness;
S202, retrieving and comparing each of the M frames against the face library to find the N most similar users, with corresponding similarities s_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, with K users obtained in total from the retrieval over the M frames, calculating the similarity score of each of the K users as score(user_j) = Σ_{i=1}^{M} q_i · s_{i,user_j};
S204, sorting the K users in descending order of score(user_j) and selecting the several most similar users.
8. The system of claim 7, wherein after the comparison analysis the face comparison unit further performs the following operations:
S2031, calculating the face attributes by a deep learning method;
S2032, judging whether the detected face already exists in the user information database; if it exists, updating the stored face attribute result; otherwise, storing the recognition result together with the face attribute calculation result.
9. The system of claim 8, wherein the face attribute calculation result further comprises a time point and a location when the face image was first acquired.
10. The system according to claim 8, wherein before step S2031 the face comparison unit further performs the following operation:
S2030, counting the number of faces detected within a certain time, and the time and duration of each face's appearances.
11. The system of claim 1, wherein the data input module further comprises a video decoding unit, and the data output module further comprises a video assembly distribution unit;
the video decoding unit is used for reading a real-time video stream or a local video file and decoding it into a sequence of video frame images;
the video assembly and distribution unit is used for re-encoding into a video the original video frame sequence onto which the face detection frames and face-related information have been superimposed.
12. The system of claim 11, wherein the system further comprises a camera, and wherein the data input module further comprises a video configuration unit configured to configure monitoring parameters for a video channel scene.
13. A face recognition method, characterized in that the method at least comprises:
S100, face detection and tracking: receiving an image sequence to be detected, detecting and tracking faces in the images, judging their quality, and selecting several frames that meet the requirements as key frames for comparison and analysis; the image sequence to be detected is a plurality of frame images within a certain time interval;
S200, face comparison and analysis: extracting the face features of the key frames, and retrieving and selecting several similar face features from a user information database for comparison and analysis;
wherein: the face features are expressed as multi-dimensional feature vectors;
the user information database allows M face images of a single person to be stored under the same first identifier; the data output module comprises an image output unit and/or a message subscription unit;
S300, outputting a result: marking the faces detected in the key frames with identification frames and superimposing the face-related information onto the original video; and/or sending event messages to terminal subscribers;
the detecting and tracking of faces comprises: performing face detection once every several frames, and marking the part including the face with a marking frame for each face meeting the quality requirement; and, when the coincidence degree of the marked face area and an already-detected face area meets a preset threshold, performing face tracking according to the image in the marking frame.
14. The method of claim 13, wherein the quality determination in S100 comprises the steps of:
S1010, for each detected face image, first judging whether the distance between the two eyes meets a set requirement; if so, executing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, executing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal face score meets the set requirement; if so, judging that the frame can be used for recognizing the face; otherwise, discarding the detected face image.
15. The method according to claim 13, wherein, when the coincidence degree of the marked face area and the detected face area meets a preset threshold, performing face tracking according to the image in the marking frame comprises the following steps:
S102, judging whether the marked face area coincides with the detected face area; if the coincidence degree meets the preset threshold, the two are regarded as the same face, and step S103 is entered; otherwise, the currently marked face is regarded as a new face, and this round of tracking ends;
S103, performing face alignment on the marked face within the marking frame, detecting the positions of the face key points, computing the bounding rectangle around the key points, and using it to replace the previously detected image in the marking frame regarded as the same face.
16. The method of claim 13, wherein the user information database in S200 comprises a plurality of sub-databases, and the searching is performed based on parallel searching of the plurality of sub-databases.
17. The method according to claim 13, wherein the extraction in S200 uses the DeepID deep learning algorithm to extract the face features.
18. The method according to claim 13, wherein the similar face features in S200 are obtained through the following steps:
S2011, building a KD-tree: when searching for similar face features, a KD-tree is built to search for the K nearest neighbours, where K ≥ M;
S2012, traversing the KD-tree: at each level of the traversal, one dimension of the face feature is selected for comparison to determine the branch to search at the next level, finally yielding several face features similar to the key frame.
19. The method of claim 15, wherein:
in step S102, when the newly marked face and the detected face are judged to be the same face, the newly marked face image and the detected face are labelled with the same second identifier;
and the comparison analysis in S200 includes the following steps:
S201, calculating a quality score q_i, i ∈ [1, M], for each of the M frames of images carrying the same second identifier, according to frontal pose and sharpness;
S202, retrieving and comparing each of the M frames against the face library to find the N most similar users, with corresponding similarities s_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, with K users obtained in total from the retrieval over the M frames, calculating the similarity score of each of the K users as score(user_j) = Σ_{i=1}^{M} q_i · s_{i,user_j};
S204, sorting the K users in descending order of score(user_j) and selecting the several most similar users.
20. The method of claim 19, wherein, after the comparison analysis, step S200 further performs the following operations:
S2031, calculating face attributes by a deep learning method;
S2032, judging whether the detected face already exists in the user information database; if it exists, updating the face attribute result; otherwise, storing the recognition result together with the face attribute calculation result.
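The update-or-insert behaviour of S2032 reduces to a small upsert; the dictionary record layout below is an assumption.

    # Sketch of S2032 against an in-memory store; the record layout is assumed.
    def upsert_user(db: dict, face_id: str, recognition_result, attributes):
        if face_id in db:
            db[face_id]["attributes"] = attributes     # face known: update attributes
        else:
            db[face_id] = {"recognition": recognition_result,
                           "attributes": attributes}   # new face: store both together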
21. The method of claim 20, wherein the face attribute calculation result further comprises the time point and location at which the face image was first acquired.
22. The method according to claim 20, wherein, before step S2031, step S200 further performs the following operation:
S2030, counting the number of faces detected within a given period, together with the appearance time and duration of each face.
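The S2030 statistics can be accumulated from timestamped sightings; the (face_id, timestamp) event format below is an assumption.

    # Sketch of S2030: per-face appearance time and duration within a window;
    # the (face_id, timestamp) event format is assumed.
    from collections import defaultdict

    def face_statistics(events, window_start, window_end):
        seen = defaultdict(list)
        for face_id, ts in events:
            if window_start <= ts <= window_end:
                seen[face_id].append(ts)
        return {fid: {"sightings": len(ts_list),
                      "first_seen": min(ts_list),
                      "duration": max(ts_list) - min(ts_list)}
                for fid, ts_list in seen.items()}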
23. The method of claim 13, wherein:
before step S100 receives the image sequence to be detected, the method further includes reading a real-time video stream or a local video file and parsing it into a sequence of video frame images;
after the face detected in the key frame is marked with the identification frame and the face-related information is overlaid on the original video in step S300, step S300 further includes re-encoding into a video the original video frame image sequence overlaid with the face detection frames and the face-related information.
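A decode, overlay, re-encode pipeline of the kind claim 23 describes can be sketched with OpenCV; the codec, the overlay styling, and the boxes_per_frame input are assumptions.

    # Sketch of decode -> overlay -> re-encode with OpenCV; codec and overlay
    # details are assumed. boxes_per_frame maps a frame index to a list of
    # ((x1, y1, x2, y2), label) pairs produced by the recognition stage.
    import cv2

    def annotate_video(src_path, dst_path, boxes_per_frame):
        cap = cv2.VideoCapture(src_path)          # read video file or stream
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        idx = 0
        while True:
            ok, frame = cap.read()                # parse the frame image sequence
            if not ok:
                break
            for (x1, y1, x2, y2), label in boxes_per_frame.get(idx, []):
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, label, (x1, y1 - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
            out.write(frame)                      # re-encode frames with overlays
            idx += 1
        cap.release()
        out.release()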
24. The method of claim 23, wherein the real-time video stream is captured by a camera, the method further comprising configuring video channel scene monitoring parameters for the camera that captures the real-time video stream.
CN201510872357.3A 2015-12-02 2015-12-02 Face recognition system and method Active CN105488478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510872357.3A CN105488478B (en) 2015-12-02 2015-12-02 Face recognition system and method

Publications (2)

Publication Number Publication Date
CN105488478A CN105488478A (en) 2016-04-13
CN105488478B true CN105488478B (en) 2020-04-07

Family

ID=55675450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510872357.3A Active CN105488478B (en) 2015-12-02 2015-12-02 Face recognition system and method

Country Status (1)

Country Link
CN (1) CN105488478B (en)

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358141B (en) * 2016-05-10 2020-10-23 阿里巴巴集团控股有限公司 Data identification method and device
CN106327546B (en) * 2016-08-24 2020-12-08 北京旷视科技有限公司 Method and device for testing face detection algorithm
CN106845356B (en) * 2016-12-24 2018-06-05 深圳云天励飞技术有限公司 A kind of method of recognition of face, client, server and system
CN106878670B (en) * 2016-12-24 2018-04-20 深圳云天励飞技术有限公司 A kind of method for processing video frequency and device
CN106845355B (en) * 2016-12-24 2018-05-11 深圳云天励飞技术有限公司 A kind of method of recognition of face, server and system
CN106682650A (en) * 2017-01-26 2017-05-17 北京中科神探科技有限公司 Mobile terminal face recognition method and system based on technology of embedded deep learning
CN106919917A (en) * 2017-02-24 2017-07-04 北京中科神探科技有限公司 Face comparison method
CN106961595A (en) * 2017-03-21 2017-07-18 深圳市科漫达智能管理科技有限公司 A kind of video frequency monitoring method and video monitoring system based on augmented reality
CN110475503A (en) 2017-03-30 2019-11-19 富士胶片株式会社 The working method of medical image processing device and endoscopic system and medical image processing device
CN107133568B (en) * 2017-03-31 2019-11-05 浙江零跑科技有限公司 A kind of speed limit prompt and hypervelocity alarm method based on vehicle-mounted forward sight camera
CN107292240B (en) * 2017-05-24 2020-09-18 深圳市深网视界科技有限公司 Person finding method and system based on face and body recognition
CN109033924A (en) * 2017-06-08 2018-12-18 北京君正集成电路股份有限公司 The method and device of humanoid detection in a kind of video
CN109145679B (en) 2017-06-15 2020-05-12 杭州海康威视数字技术股份有限公司 Method, device and system for sending out early warning information
CN108875488B (en) * 2017-09-29 2021-08-06 北京旷视科技有限公司 Object tracking method, object tracking apparatus, and computer-readable storage medium
CN107679613A (en) * 2017-09-30 2018-02-09 同观科技(深圳)有限公司 A kind of statistical method of personal information, device, terminal device and storage medium
CN107784294B (en) * 2017-11-15 2021-06-11 武汉烽火众智数字技术有限责任公司 Face detection and tracking method based on deep learning
CN108038422B (en) * 2017-11-21 2021-12-21 平安科技(深圳)有限公司 Camera device, face recognition method and computer-readable storage medium
CN108229320B (en) * 2017-11-29 2020-05-22 北京市商汤科技开发有限公司 Frame selection method and device, electronic device, program and medium
CN108228742B (en) * 2017-12-15 2021-10-22 深圳市商汤科技有限公司 Face duplicate checking method and device, electronic equipment, medium and program
CN108124157B (en) * 2017-12-22 2020-08-07 北京旷视科技有限公司 Information interaction method, device and system
CN108241853A (en) * 2017-12-28 2018-07-03 深圳英飞拓科技股份有限公司 A kind of video frequency monitoring method, system and terminal device
CN110008793A (en) * 2018-01-05 2019-07-12 中国移动通信有限公司研究院 Face identification method, device and equipment
CN108345851A (en) * 2018-02-02 2018-07-31 成都睿码科技有限责任公司 A method of based on recognition of face analyzing personal hobby
CN108399247A (en) * 2018-03-01 2018-08-14 深圳羚羊极速科技有限公司 A kind of generation method of virtual identity mark
CN110298213B (en) * 2018-03-22 2021-07-30 赛灵思电子科技(北京)有限公司 Video analysis system and method
CN108647581A (en) * 2018-04-18 2018-10-12 深圳市商汤科技有限公司 Information processing method, device and storage medium
CN108875556B (en) * 2018-04-25 2021-04-23 北京旷视科技有限公司 Method, apparatus, system and computer storage medium for testimony of a witness verification
CN108446681B (en) * 2018-05-10 2020-12-15 深圳云天励飞技术有限公司 Pedestrian analysis method, device, terminal and storage medium
CN108805040A (en) * 2018-05-24 2018-11-13 复旦大学 It is a kind of that face recognition algorithms are blocked based on piecemeal
CN108805046B (en) * 2018-05-25 2022-11-04 京东方科技集团股份有限公司 Method, apparatus, device and storage medium for face matching
CN110580425A (en) * 2018-06-07 2019-12-17 北京华泰科捷信息技术股份有限公司 Human face tracking snapshot and attribute analysis acquisition device and method based on AI chip
CN109145707B (en) * 2018-06-20 2021-09-14 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
WO2020001175A1 (en) * 2018-06-26 2020-01-02 Wildfaces Technology Limited Method and apparatus for facilitating identification
CN109034036B (en) * 2018-07-19 2020-09-01 青岛伴星智能科技有限公司 Video analysis method, teaching quality assessment method and system and computer-readable storage medium
CN109344686A (en) * 2018-08-06 2019-02-15 广州开瑞信息科技有限公司 A kind of intelligent face recognition system
CN109190527A (en) * 2018-08-20 2019-01-11 合肥智圣新创信息技术有限公司 A kind of garden personnel track portrait system monitored based on block chain and screen
CN109344765A (en) * 2018-09-28 2019-02-15 广州云从人工智能技术有限公司 A kind of intelligent analysis method entering shop personnel analysis for chain shops
CN109584208B (en) * 2018-10-23 2021-02-02 西安交通大学 An inspection method for intelligent identification model of industrial structural defects
CN111126119B (en) * 2018-11-01 2024-08-20 百度在线网络技术(北京)有限公司 Face recognition-based store user behavior statistics method and device
CN111161206A (en) * 2018-11-07 2020-05-15 杭州海康威视数字技术股份有限公司 Image capturing method, monitoring camera and monitoring system
CN109606376A (en) * 2018-11-22 2019-04-12 海南易乐物联科技有限公司 A kind of safe driving Activity recognition system based on vehicle intelligent terminal
CN109635693B (en) * 2018-12-03 2023-03-31 武汉烽火众智数字技术有限责任公司 Front face image detection method and device
CN109784231B (en) * 2018-12-28 2023-07-25 广东中安金狮科创有限公司 Security information management method, device and storage medium
CN109726680A (en) * 2018-12-28 2019-05-07 东方网力科技股份有限公司 Face recognition method, device, system and electronic equipment
CN109801394B (en) * 2018-12-29 2021-07-30 南京天溯自动化控制系统有限公司 Staff attendance checking method and device, electronic equipment and readable storage medium
CN109711369A (en) * 2018-12-29 2019-05-03 深圳英飞拓智能技术有限公司 Pedestrian count method, apparatus, system, computer equipment and storage medium
CN110009662B (en) * 2019-04-02 2021-09-17 北京迈格威科技有限公司 Face tracking method and device, electronic equipment and computer readable storage medium
CN110321857B (en) * 2019-07-08 2021-08-17 苏州万店掌网络科技有限公司 Accurate customer group analysis method based on edge computing technology
CN112579809B (en) * 2019-09-27 2024-10-01 深圳云天励飞技术有限公司 Data processing method and related device
CN112329635B (en) * 2020-11-06 2022-04-29 北京文安智能技术股份有限公司 Method and device for counting store passenger flow
CN112329665B (en) * 2020-11-10 2022-05-17 上海大学 A face capture system
CN112926542B (en) * 2021-04-09 2024-04-30 博众精工科技股份有限公司 Sex detection method and device, electronic equipment and storage medium
CN113886682B (en) * 2021-09-10 2024-09-27 平安科技(深圳)有限公司 Information pushing method, system and storage medium in shoulder and neck movement scene
US12293568B2 (en) * 2022-05-11 2025-05-06 Verizon Patent And Licensing Inc. System and method for facial recognition
CN115187915A (en) * 2022-09-07 2022-10-14 苏州万店掌网络科技有限公司 Passenger flow analysis method, device, equipment and medium
CN116071851A (en) * 2022-12-28 2023-05-05 广东飞企互联科技股份有限公司 An intelligent access control system based on the Internet of Things
CN116912925A (en) * 2023-09-14 2023-10-20 齐鲁空天信息研究院 Face recognition method, device, electronic equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003187352A (en) * 2001-12-14 2003-07-04 The Nippon Signal Co Ltd System for detecting specified person
CN1428718B (en) * 2002-06-21 2010-10-20 上海银晨智能识别科技有限公司 Airport outgoing passenger intelligent identity identification method and system
JP2007334623A (en) * 2006-06-15 2007-12-27 Toshiba Corp Face authentication device, face authentication method, and access control device
CN101404107A (en) * 2008-11-19 2009-04-08 公安部第三研究所 Internet bar monitoring and warning system based on human face recognition technology

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101502088A (en) * 2006-10-11 2009-08-05 思科技术公司 Interaction based on facial recognition of conference participants
CN101404094A (en) * 2008-11-28 2009-04-08 中国电信股份有限公司 Video monitoring and warning method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Eyeglasses Removal and Region Restoration in Face Images"; Guo Pei; China Master's Theses Full-text Database, Information Science and Technology; 20150815; Section 1.3 of the thesis *
"Low-Power Embedded Real-Time Face Recognition System"; Luo Chao; China Master's Theses Full-text Database, Information Science and Technology; 20130715; Section 3.4 of the thesis *
"Research on Object Recognition and Localization Methods Based on Feature Matching"; Li Zhen; China Master's Theses Full-text Database, Information Science and Technology; 20130415; Section 2.3 of the thesis *

Also Published As

Publication number Publication date
CN105488478A (en) 2016-04-13

Similar Documents

Publication Publication Date Title
CN105488478B (en) Face recognition system and method
CN105574506B (en) Intelligent face pursuit system and method based on deep learning and large-scale clustering
US10779037B2 (en) Method and system for identifying relevant media content
CN109934176B (en) Pedestrian recognition system, recognition method, and computer-readable storage medium
Kumar et al. The p-destre: A fully annotated dataset for pedestrian detection, tracking, and short/long-term re-identification from aerial devices
CN205451095U (en) A face -identifying device
CN109271554B (en) Intelligent video identification system and application thereof
CN107480236B (en) An information query method, device, equipment and medium
US20130148898A1 (en) Clustering objects detected in video
CN107169106B (en) Video retrieval method, device, storage medium and processor
CN110532432A (en) Character track retrieval method and system and computer readable storage medium
KR101089287B1 (en) Automatic Face Recognition Apparatus and Method Based on Multi-face Feature Information Fusion
CN106355154B (en) Method for detecting frequent passing of people in surveillance video
CN109902681B (en) User group relation determining method, device, equipment and storage medium
CN109492604A (en) Faceform's characteristic statistics analysis system
CN104239309A (en) Video analysis retrieval service side, system and method
CN110442742A9 (en) Method and device for retrieving image, processor, electronic equipment and storage medium
CN107247919A (en) The acquisition methods and system of a kind of video feeling content
WO2021259033A1 (en) Facial recognition method, electronic device, and storage medium
CN113630721A (en) Method and device for generating recommended tour route and computer readable storage medium
CN112036262A (en) A face recognition processing method and device
US11256945B2 (en) Automatic extraction of attributes of an object within a set of digital images
CN110659615A (en) Passenger group flow and structural analysis system and method based on face recognition
CN118115216A (en) Intelligent advertisement putting device based on scene recognition
CN115115976B (en) Video processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant