
CN106485232B - Personnel identification method based on nose image features in breathing process - Google Patents


Info

Publication number
CN106485232B
Authority
CN
China
Prior art keywords
breathing
nose
respiratory
person
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610919594.5A
Other languages
Chinese (zh)
Other versions
CN106485232A (en)
Inventor
肖书明
程燕
陈骐
刘泳庆
胡齐
王建明
梁智敏
沈唯真
初宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHINA INSTITUTE OF SPORT SCIENCE
Original Assignee
CHINA INSTITUTE OF SPORT SCIENCE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHINA INSTITUTE OF SPORT SCIENCE filed Critical CHINA INSTITUTE OF SPORT SCIENCE
Priority to CN201610919594.5A
Publication of CN106485232A
Application granted
Publication of CN106485232B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a person identification method based on nose image features during breathing. Nasal breathing features are collected in real time by an infrared thermal imager and compared with nasal breathing features previously collected and stored in a database; when the comparison result reaches a set similarity threshold, the two are considered to belong to the same person. The comparison covers both the nasal static breathing parameter features and the nasal dynamic breathing parameter features over a plurality of consecutive breathing cycles. Because the breathing feature parameters based on configuration (form and position) clustering reflect the essential characteristics of a person's breathing, and these characteristics are independent of any specific breathing pattern and do not change with the external environment or the person's temporary physiological responses, the method can identify a person breathing in any pattern, and thus complements current face recognition.

Description

Personnel identification method based on nose image features in breathing process
Technical Field
The invention relates to face recognition, in particular to a person recognition method based on nose image features in a breathing process.
Background
Automatic identification of persons is typically based on their physiological and behavioral characteristics, such as voice or speech patterns, face, iris, retina, fingerprint, and palm. When selecting an automatic identification system, the following factors are generally considered: resistance to fraud, ease of use, interference with the identified person, suitability for particular groups of people, recognition speed, size of the model (feature information), long-term stability of the biometric feature, cost of use, and so on. No single recognition method excels in all of these respects, so the most appropriate method must be chosen according to the specific requirements of a given application. For example, the equipment for iris and retina recognition is very expensive, and the interference with the identified person is considerable; yet iris and retina recognition are the two most accurate methods, so they remain the first choice in many settings where security is paramount, such as nuclear power stations and important military bases.
Beyond such security-critical applications, there are many applications that tolerate relatively low recognition accuracy but require minimal interference with the user. Iris, retina, fingerprint, and palm recognition are contact-based or require substantial cooperation from the person being identified, and are therefore unsuitable in these cases. Consider the following examples: in an intelligent remote video conference, a speech record must be generated automatically according to the speaker's identity, so that identity must be acquired automatically without disturbing the speaker; in an intelligent entrance hall, the identity of specific persons must be recognized to decide whether to open the gate, without requiring pedestrians to stop (fingerprint scanning and similar methods impose an extra burden, since both of the user's hands may be occupied). Contact-based identification also raises hygiene concerns, and some people are reluctant to accept it because of the association between fingerprints/palm prints and crime. Among the candidate technologies, the one most promising for these needs is face recognition, whose advantages include being visual, ubiquitous, and easy to extract. However, face recognition has the following problems: 1. the recognition result is strongly affected by lighting conditions, and reliable face recognition cannot be performed when suitable lighting is unavailable (for example, when equipment size and power supply are limited); 2. differences in facial expression affect data acquisition and extraction; 3. occlusion and damage: if the identified person wears accessories such as sunglasses or a hat, or the face is injured or soiled, signal data are lost; 4. the problem of twins; 5. mature and widely used plastic-surgery and face-lifting techniques introduce variability into the face that current identification systems can hardly handle.
Disclosure of Invention
The invention aims to provide a person identification method and a person identification system based on nose image characteristics in a breathing process.
In order to achieve this purpose, the technical scheme of the invention is as follows: a person identification method based on nose image features during breathing comprises collecting nasal breathing features in real time with an infrared thermal imager and comparing them with nasal breathing features previously collected and stored in a database; when the comparison result reaches a set similarity threshold, the person is considered to be the same person as the database entry. The comparison of nasal breathing features covers both the nasal static breathing parameter features and the nasal dynamic breathing parameter features over a plurality of consecutive breathing cycles.
The scheme is further as follows:
the nasal static breathing parameters include: the temperature of the left alar nose, the temperature of the right alar nose, the temperature of the left nasolabial sulcus, the temperature of the right nasolabial sulcus, the temperature of the columella, the temperature of the tip of the nose, and the temperature of the lower half of the bridge of the nose;
the nasal dynamic respiratory parameters include: the method comprises the following steps of determining the frequency of the lowest or highest temperature of the left nostril, the lowest temperature of the left nostril, the highest temperature of the left nostril, the frequency of the lowest or highest temperature of the right nostril, the lowest temperature of the right nostril, the highest temperature of the right nostril, the maximum low temperature area of the left nostril, the maximum low temperature area of the right nostril, the ratio of the width of the maximum low temperature area of the left nostril to the width of the columella, the ratio of the width of the maximum low temperature area of the right nostril to the width of the columella, and the time difference of the highest or lowest temperature of the left and right nostrils in a plurality of.
The scheme is further as follows: the nose static breathing parameter feature is that difference calculation is carried out on the nose static breathing parameters of a plurality of breathing cycles and the current environment temperature, and the average value of the difference is used as the nose static breathing parameter feature;
the nose dynamic respiration parameter feature is that the nose dynamic respiration parameters of a plurality of respiration cycles are subjected to cluster analysis, and dynamic respiration form and position clusters generated after the cluster analysis are used as the nose dynamic respiration parameter feature.
The scheme is further as follows: the consecutive plurality of respiratory cycles is at least 10.
The scheme is further as follows: the dynamic respiration configuration clustering comprises main configuration clustering and sub-configuration clustering, wherein:
the main configuration clustering is a plurality of primary configuration clustering obtained by clustering and analyzing vectors formed by the nasal dynamic breathing parameters of the plurality of breathing cycles;
and the sub-configuration clustering is a plurality of secondary configuration clustering obtained by carrying out clustering analysis on the plurality of primary configuration clustering again.
The scheme is further as follows: the similarity threshold is at least 70%.
The scheme is further as follows: the clustering analysis is performed by adopting a K-means clustering method or a fuzzy C-means clustering method.
The invention has the following advantages: because the breathing feature parameters based on configuration clustering reflect the essential characteristics of a person's breathing, and these characteristics are independent of any specific breathing pattern and do not change with the external environment or the person's temporary physiological responses, the identification method is highly robust and can identify a person breathing in any pattern. In addition, the person's breathing state parameters are obtained from thermal infrared images of the nose; acquiring such images requires no direct contact between the person and the equipment and can be done at a relatively long distance, so the recognition process interferes little with the person being identified. The ability to identify people breathing in any pattern complements current face recognition.
Detailed Description
A person identification method based on nose image features during breathing comprises collecting nasal breathing features in real time with an infrared thermal imager and comparing them with nasal breathing features previously collected and stored in a database; when the comparison result reaches a set similarity threshold, the person is considered to be the same person as the database entry. The comparison of nasal breathing features covers both the nasal static breathing parameter features and the nasal dynamic breathing parameter features over a plurality of consecutive breathing cycles.
In the examples: the nasal static breathing parameters include: the temperature of the left alar nose, the temperature of the right alar nose, the temperature of the left nasolabial sulcus, the temperature of the right nasolabial sulcus, the temperature of the columella, the temperature of the tip of the nose, and the temperature of the lower half of the bridge of the nose;
the nasal dynamic breathing parameters include: the method comprises the following steps of determining the frequency of the lowest or highest temperature of the left nostril, the lowest temperature of the left nostril, the highest temperature of the left nostril, the frequency of the lowest or highest temperature of the right nostril, the lowest temperature of the right nostril, the highest temperature of the right nostril, the maximum low temperature area of the left nostril, the maximum low temperature area of the right nostril, the ratio of the width of the maximum low temperature area of the left nostril to the width of the columella, the ratio of the width of the maximum low temperature area of the right nostril to the width of the columella, and the time difference of the highest or lowest temperature of the left and right nostrils in a plurality of.
Wherein: the nose static breathing parameter feature is that difference calculation is carried out on the nose static breathing parameters of a plurality of breathing cycles and the current environment temperature, and the average value of the difference is used as the nose static breathing parameter feature;
the nose dynamic respiration parameter feature is that the nose dynamic respiration parameters of a plurality of respiration cycles are subjected to cluster analysis, and the dynamic respiration form and position (shape and position) generated after the cluster analysis is clustered as the nose dynamic respiration parameter feature.
Wherein: the consecutive plurality of respiratory cycles is at least 10.
In the examples: the dynamic respiration configuration clustering comprises main configuration clustering and sub-configuration clustering, wherein:
the main configuration clustering is a plurality of primary configuration clustering obtained by clustering and analyzing vectors formed by the nasal dynamic breathing parameters of the plurality of breathing cycles;
and the sub-configuration clustering is a plurality of secondary configuration clustering obtained by carrying out clustering analysis on the plurality of primary configuration clustering again.
Wherein: the similarity threshold is at least 70%, namely: at least 70% identical, or with an error less than a set threshold, very close. The clustering analysis is performed by adopting a K-means clustering method or a fuzzy C-means clustering method.
The embodiment provides main configuration clustering and sub-configuration clustering, specifically as follows. Main configuration clustering means that, according to certain criteria, all dynamic breathing parameter vectors are partitioned into several mutually disjoint sets that together cover all of the vectors, providing a description and identification of the periodic breathing process based on a generalized breathing model.
Note: the first cluster analysis is performed on the set of dynamic breathing parameter vectors belonging to a number of consecutive breathing cycles of one person. According to certain criteria, all dynamic breathing parameters are divided into several specific states (or "phases") of a generalized (or "normalized") breathing cycle, where each state contains several dynamic breathing parameter vectors, and vectors belonging to different states are disjoint. The main configurations provide a description and identification of the periodically varying course of breathing. This description is common to different people, i.e., everyone's breathing follows this model.
Sub-configuration clustering means that a second cluster analysis is performed on all the main breathing configurations produced by the first cluster analysis. The dynamic breathing parameter vectors contained in each main configuration are divided, according to a specific criterion, into several mutually disjoint classes that together cover all the vectors in that configuration, providing a description and identification of the numerical statistical characteristics of the breathing configuration.
Note: the second cluster analysis is performed on the basis of the several main configurations obtained from the first cluster analysis. The dynamic breathing parameter vectors contained in each main configuration are divided, based on their numerical values and according to a specific criterion, into several mutually disjoint classes, each containing vectors with the same or similar values. All classes derived from one breathing configuration together cover all the parameter vectors in that configuration, and vectors belonging to different classes are disjoint. The sub-configurations provide a refined description and identification of the statistical characteristics of the individual breathing configurations. Even for the same main breathing configuration, the distribution of values across the sub-configurations differs from person to person; and for the same person, the numerical distribution characteristics of the main and sub-configurations are stable over the long term and can therefore be used for person identification.
This embodiment relates to a method of identifying a person based on thermal infrared image features of the nose during breathing. Its basic principle is: the distribution of the temperature of a person's nose over time and space across multiple breathing cycles encodes physiological and behavioral characteristics of that person. From the static temperature distribution and the dynamic temperature changes at specific positions on the nose, breathing feature parameters unique to the person can be extracted and used to identify that person. These breathing feature parameters cover two aspects. First, under static conditions the temperatures at different positions on a person's nose differ, and different people have their own characteristic temperature distributions; extracting these features is relatively simple and can be done directly with the infrared thermal imager. Second, the dynamic change characteristics of the nostril regions during breathing are obtained by examining the statistical characteristics of the person's breathing configurations and sub-configurations over a number of breathing cycles. The breathing configurations and sub-configurations are described in detail later.
The physiological basis for distinguishing different persons in this embodiment is that different people's breathing activity exhibits different characteristics. A brief description of human breathing helps to understand this fact. Through continuous breathing, the human body inhales oxygen from outside and exhales carbon dioxide, ensuring normal life activity. The human respiratory system consists mainly of the airways and the lungs, and breathing is periodic. In each breathing cycle, the lungs pass through a number of fixed physiological states in sequence. During inhalation, the respiratory muscles drive the bones of the chest and the diaphragm outward, the thoracic cavity expands, the lungs expand, and air is drawn in. At the end of the inhalation action, the pressure inside the lungs equilibrates with atmospheric pressure. Thereafter, the components of the thoracic cavity elastically retract, the cavity contracts, and the air in the lungs is expelled from the body; this is the exhalation process. When exhalation is finished, the body begins the next inhalation.
The regulation of lung activity in humans is accomplished by the associated neural apparatus, which includes the respiratory center and the intrapulmonary respiratory reflex apparatus. The respiratory center is located in the medulla oblongata and consists of an inspiratory center and an expiratory center, which are alternately excited and inhibited, producing the regular breathing action. The other neural apparatus regulating lung activity, the intrapulmonary respiratory reflex, operates through the vagus nerve and receptors within the lungs. Pulmonary stretch receptors, J receptors, and irritant receptors are the three forms of receptors in the lung; they are responsible for the rhythmic contraction of the lung and for regulating its activity in special circumstances.
Different persons have different breathing characteristics because the physiological structures and dynamic response characteristics of their respiratory organs and nervous tissues differ, and because people have different breathing habits. These properties are expressed in the following ways: 1. under the same or essentially the same conditions (external environment, state of motion, breathing pattern, etc.), different people's breathing has its own characteristics, and an observer can perceive the differences; 2. for the same person, breathing changes under different conditions, but because that person's physiological structure is fixed, his or her breathing in different modes retains a certain similarity that can be identified and extracted; 3. the unique biological characteristics of a person's breathing are stable over the long term and do not weaken with age. It follows that by extracting a person's characteristic intrinsic breathing features, person identification can be performed efficiently.
The breathing characteristics described above include both the static temperature distribution and the dynamic temperature changes, and both are reflected in the thermal infrared image of the nose region during breathing. This can be understood from the physiological structure of the nose and from the physical principles of thermal infrared imaging.
The human nose can be divided into the external nose, the nasal cavity, and the paranasal sinuses; the parts relevant to the invention are the external nose and the nasal cavity. The external nose is the clearly visible part, located in the center of the face and shaped like a three-sided pyramid with its base below. Its identifiable portions are: the nasal root, the nose tip, the nasal bridge, the nasal dorsum, the nasal alae, the nasolabial sulci, the nasal base, the columella, and the left and right anterior nares. The external nose consists mainly of a cartilage framework and the external nasal muscles, and its surroundings are rich in blood vessels, nerves, and lymph. The shape and size of the external nose show significant individual, ethnic, and regional differences. As an example, consider the differences traditionally ascribed to the external noses of the Caucasian, Black, and Yellow races: the Caucasian nose is high, with a narrow root and a small tip, shaped like an eagle's beak; the Black nose has a low bridge, a wide root, and a large tip, with upturned nostrils; the Yellow race's nose lies between the two. Different people's noses have different shapes and physiological structures, and thus different heat generation and heat transfer characteristics, so different people's noses show distinct differences in thermal infrared images. Moreover, for the same person remaining still or moving slowly, the shape (temperature distribution) of the nose in the thermal infrared image has intrinsic long-term stability. Such a temperature distribution characteristic, which varies from person to person but is stable for the same person, can be used as a feature for person identification.
Such temperature distribution characteristics are referred to herein as static breathing characteristics, and the extracted features are represented as a set of static breathing characteristic parameters.
Another nasal structure relevant to this embodiment is the nasal cavity. The nasal cavity is divided into left and right cavities, and the external (front) opening of each cavity is called the anterior naris. Each cavity is further divided into the nasal vestibule and the nasal cavity proper, with the limen nasi as the boundary. Looking inward from the anterior nares, the nasal vestibule and part of the nasal mucosa are visible. Like the lungs, the nose also passes periodically through a number of fixed physiological states during each breathing cycle, and because the nose is the first link in the body's gas exchange with the outside, its state changes during breathing are all the more pronounced. Air inhaled from outside, and air exhaled from the lungs, flows through the nasal cavity and the anterior nares, causing the temperature of these two parts to fall or rise significantly. The temperature changes of the nostril regions during breathing have person-specific characteristics; this too is a manifestation of the uniqueness of each person's breathing.
The physical principle of thermal infrared imaging is as follows: every object in the real world, including the human body, constantly radiates electromagnetic energy (spontaneous radiation) into its surroundings. At normal temperatures this spontaneous radiation is mainly infrared radiation, invisible to the human eye but with a strong heating effect. Infrared, visible light, ultraviolet, X-rays, gamma rays, microwaves, and other radio waves together form one continuous electromagnetic spectrum. A thermal infrared imaging system detects and displays the energy density distribution of infrared radiation, exploiting the heat contrast between the target and its environment caused by differences in temperature and emissivity. In most thermal infrared imaging applications, what is detected is the object's own radiation and the temperature distribution of its surface; this mode requires no artificial illumination of any kind and is therefore often called passive imaging.
For state detection and data acquisition, a thermal infrared imaging system has the following advantages: 1) infrared imaging is a non-contact detection method that can acquire target information at a considerable distance and monitor a large area; 2) the detected data can be accurately quantified, with measurement accuracy usually within ±2 °C and temperature resolution on the order of 0.01 °C; 3) the degree of automation is high, permitting long-term continuous operation without an attendant; 4) the device relies on the object's own radiation, requires no auxiliary signal source, and emits no radiation toward the human body; 5) infrared penetrates fog and haze better than visible light, giving thermal infrared imaging a degree of all-day, all-weather capability; 6) many thermal infrared imaging devices are solid-state, lightweight, and easy to carry and deploy. Thermal infrared imaging devices are now widely used in both military and civil applications, and with the maturation of the technology many low-cost thermal imagers suitable for civil use have appeared, all of which can be used for the person identification provided by the invention.
As mentioned above, breathing differs among individuals. One manifestation of this is that the change of state of the nose during breathing has person-specific characteristics. Viewed with a thermal infrared imaging device, the nostril region cycles periodically through several states. The periodic variation of the nostril region in the infrared image proceeds as follows: in the first stage, the chest expands, the inhalation process begins, and ambient air enters the nasal cavity; because the ambient air is cooler, the temperature inside the nasal cavity falls. In the thermal infrared image the nostril region darkens continuously and the dark area grows, and this change continues until the chest reaches maximum expansion and stops (at which point inhalation stops). In the second stage, the chest contracts and the exhalation process begins; gas flows out of the lungs through the nose, and because this gas is warm, the temperature in the nasal cavity rises. In the thermal infrared image the nostril region brightens continuously and the dark area shrinks, and this change continues until the chest contracts to its minimum (at which point exhalation stops). This variation process of the nostril region is similar for different people, but because breathing varies from person to person, it carries person-specific characteristics.
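The darkening/brightening cycle described above suggests a simple way to locate the "inhalation stop" and "exhalation stop" moments in a nostril-region temperature time series. The sketch below uses naive local-extreme detection on synthetic values; real thermal video would need smoothing first, and nothing here is the patent's actual processing:

```python
def breath_extremes(temps):
    """Return indices of local minima ('inhalation stop') and local maxima
    ('exhalation stop') in a nostril-region temperature time series."""
    minima, maxima = [], []
    for i in range(1, len(temps) - 1):
        if temps[i] < temps[i - 1] and temps[i] < temps[i + 1]:
            minima.append(i)   # coldest point: inhalation has just stopped
        elif temps[i] > temps[i - 1] and temps[i] > temps[i + 1]:
            maxima.append(i)   # warmest point: exhalation has just stopped
    return minima, maxima

# Synthetic series: temperature falls during inhalation, rises during exhalation
series = [33.0, 32.0, 31.0, 32.0, 33.5, 32.5, 31.2, 32.3, 33.6]
mins, maxs = breath_extremes(series)  # mins == [2, 6], maxs == [4]
```

Consecutive minima (or maxima) then delimit individual breathing cycles, from which the per-cycle dynamic parameters can be measured.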
To describe the changes of the nostril region in the thermal infrared image, the evolution of the nostril region may be generalized into several typical states, each referred to as a breathing configuration, and each corresponding to an image state (a temperature distribution) of the nostril region. During a breathing cycle, the image of the nostril region passes through several different states in sequence, each corresponding to a particular breathing configuration. For example, when the nostril region in the thermal infrared image is darkest (indicating that the temperature in the nasal cavity is lowest and inhalation has just stopped), the corresponding configuration is 'inhalation stop'; when the nostril region is brightest (indicating that the temperature in the nasal cavity is highest and exhalation has just stopped), the corresponding configuration is 'exhalation stop'. Besides the 'inhalation stop' and 'exhalation stop' configurations, a person's breathing includes a variety of other breathing configurations. Every breathing cycle of a person can be viewed as a sequence of such breathing configurations, and different breathing cycles contain different instances of the same breathing configuration. Within a breathing cycle, instances of the different breathing configurations appear in a fixed order. The number of breathing configurations contained in a breathing cycle is the same for different persons, reflecting the components of respiratory activity that people share; at the same time, the same breathing configuration has different statistical characteristics for different persons, reflecting the person-specific components of respiratory activity.
In addition, even different instances of the same breathing configuration of the same person usually differ slightly from one another, and instances with different characteristics occur with different probabilities. To examine a person's respiratory activity more finely, the instances of a breathing configuration therefore need to be further subdivided; each such subdivision is called a sub-breathing configuration. As with breathing configurations, the number of sub-breathing configurations within a given breathing configuration is the same for different persons, but their characteristic parameters differ from person to person.
As described above, since breathing differs between individuals, the dynamic change of the nostril region in the thermal infrared image also shows person-specific characteristics. Among these person-specific dynamic characteristics, some are relatively stable for a given person; these are referred to here as dynamic breathing characteristic parameters, to distinguish them from the static breathing characteristic parameters mentioned earlier. The stability of the dynamic breathing characteristics means that they are not affected by external conditions, by the physiological state of the subject, or by the particular breathing mode. The dynamic breathing characteristics manifest themselves over many breathing cycles and are intrinsically related to the statistical characteristics of the person's breathing configurations and sub-breathing configurations over those cycles. By examining these statistical characteristics over a plurality of breathing cycles, dynamic breathing characteristic parameters independent of any single breathing cycle can be obtained.
The method identifies different persons based on their breathing characteristic parameters, so the extraction of these parameters is a key process. It comprises the following steps: acquiring a breathing data sample of the person to be identified; obtaining static breathing parameter vectors and dynamic breathing parameter vectors from the sample; generating breathing configuration clusters and sub-breathing configuration clusters; and generating the breathing characteristic parameters used to identify the person. This process is described in detail below.
To extract a person's breathing characteristic parameters, a breathing data sample of that person must first be captured. One breathing data sample is a set of breathing data frames captured by a breath recording device; these frames correspond to a plurality of consecutive breathing cycles of the person and are used to extract the breathing characteristic parameters. In this embodiment, the raw breathing data, in the form of the temperature distribution at the nostril region, is obtained by some form of thermal infrared imaging device. The apparatus used to acquire breathing data may take a variety of forms, referred to collectively as a breath recording device. The acquired breathing data are thermal infrared images of the nose region at successive sampling instants, called breathing data frames. The breath recording device and the breathing data frames are described in detail later. When registering a particular person, it may be desirable to capture multiple breathing data samples and to extract breathing characteristic parameters from each sample, so as to reduce the randomness of a single sample. In addition, it may be necessary to capture samples of the person breathing in different breathing modes and to extract the breathing characteristic parameters that the person shares across these modes, thereby reducing the dependence on breathing mode during later identification.
The person's breathing data samples are then processed. The processing may involve one or more breathing data samples of the same person. Processing multiple samples simultaneously improves identification performance, because the multiple sets of extracted breathing characteristic parameters reflect the person's breathing characteristics more completely and accurately. However, the system can complete registration or identification even with a single breathing data sample, and the processing of a single sample is the basis for processing multiple samples. The following therefore focuses on the case of a single breathing data sample.
For each breathing data frame in a person's breathing data sample, a static breathing parameter vector and a dynamic breathing parameter vector are extracted. Each static breathing parameter vector comprises a plurality of static breathing parameters, and each dynamic breathing parameter vector comprises a plurality of dynamic breathing parameters. In one example, the static breathing parameters may include: the temperature of the left nasal ala, the temperature of the right nasal ala, the temperature of the left nasolabial sulcus, the temperature of the right nasolabial sulcus, the temperature of the columella, the temperature of the tip of the nose, the temperature of the lower half of the bridge of the nose, the current ambient air temperature, and so on. The dynamic breathing parameters may include, but are not limited to: the frequency at which the lowest (or highest) temperature occurs in the left nostril; the lowest and highest temperatures of the left nostril; the frequency at which the lowest (or highest) temperature occurs in the right nostril; the lowest and highest temperatures of the right nostril; the compactness (roundness) of the largest low-temperature region of the left nostril; the compactness (roundness) of the largest low-temperature region of the right nostril; the ratio of the maximum width of the low-temperature region of the left nostril to the width of the columella; the ratio of the maximum width of the low-temperature region of the right nostril to the width of the columella; and the time difference between the moments at which the left and right nostrils reach their highest (or lowest) temperatures.
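A few of the parameters listed above can be computed directly from a frame's temperature matrix. The sketch below is illustrative only: the region-of-interest slices and the asymmetry feature are assumptions, and a real extractor would compute the full parameter set from nostril regions located by tracking:

```python
import numpy as np

def extract_dynamic_vector(frame, left_roi, right_roi):
    """Derive a few illustrative dynamic breathing parameters from one
    thermal frame (a 2-D array of temperatures in deg C). The ROI
    slices are hypothetical inputs; a real system would obtain them
    from nose tracking."""
    l = frame[left_roi]   # left-nostril region
    r = frame[right_roi]  # right-nostril region
    return [
        float(l.min()), float(l.max()),  # lowest / highest temp, left
        float(r.min()), float(r.max()),  # lowest / highest temp, right
        float(l.mean() - r.mean()),      # left-right asymmetry (illustrative)
    ]

# Toy 6x6 "frame": left nostril cooler than right, as during inhalation.
frame = np.full((6, 6), 34.0)
frame[1:3, 1:3] = 31.0   # cool left-nostril patch
frame[1:3, 4:6] = 32.5   # warmer right-nostril patch
vec = extract_dynamic_vector(frame, np.s_[1:3, 1:3], np.s_[1:3, 4:6])
```

Applying such an extractor to every frame of a sample yields the per-frame dynamic breathing parameter vectors used in the clustering steps that follow.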
Within one breathing data sample, the distribution of all static and dynamic breathing parameter vectors exhibits a certain randomness; they can be regarded as a point cloud with particular probability-distribution characteristics in a high-dimensional space.
Next, based on a predetermined clustering method, all dynamic breathing parameter vectors of the breathing data sample are cluster-analyzed to form a plurality of breathing configuration clusters, each corresponding to a predetermined breathing configuration to be identified. As described above, a breathing configuration represents a particular state of the thermal infrared image of the nostril region, which in turn reflects a particular phase of the breathing cycle. Cluster analysis of all dynamic breathing parameter vectors in a sample summarizes the breathing behavior, allowing the person's intrinsic breathing characteristics to be extracted from it. After clustering, every dynamic breathing parameter vector is assigned to one of the predefined breathing configuration clusters. These clusters do not intersect: each dynamic breathing parameter vector belongs to one and only one breathing configuration cluster. The division into breathing configurations provides a basic description of the dynamic course of breathing.
As previously mentioned, a breathing configuration may be composed of a plurality of sub-breathing configurations. Therefore, for all breathing configuration clusters belonging to one breathing data sample, the following operation is performed: based on a predetermined clustering method, the dynamic breathing parameter vectors within each breathing configuration cluster are cluster-analyzed again to form a plurality of sub-breathing configuration clusters, each corresponding to a predetermined sub-breathing configuration to be identified. A dynamic breathing parameter vector belonging to a certain breathing configuration thus further belongs to a certain sub-breathing configuration. After each breathing configuration is subdivided into sub-breathing configurations, a more refined description of the person's breathing characteristics is obtained. This further cluster analysis is carried out on each of the breathing configuration clusters in turn, yielding a plurality of sub-breathing configuration clusters within each of them.
Thereafter, a set of breathing characteristic parameters of the person to be identified is determined from the statistical characteristics of the breathing configurations and sub-breathing configurations. This set of breathing characteristic parameters comprises the following indexes: the center of all dynamic breathing parameter vectors contained in each sub-breathing configuration cluster, and the mean of all static breathing parameter vectors contained in each breathing configuration cluster. Breathing characteristic parameters obtained in this way differ between individuals and are inherently stable for the same person.
In the present embodiment, the complete person identification process is divided into two stages: registration and identification. Registration is the process of adding a new person's information to the system; identification is the process of comparing the breathing characteristic parameters of the person to be identified against the information in the system.
In the registration stage, the system acquires one or more breathing data samples of a person as described above, extracts the corresponding breathing characteristic parameters from the samples, and generates a model of the person based on those parameters. A model is some form of representation of the breathing characteristics. Depending on the recognition approach adopted, models in the system may be explicit or implicit, and accordingly the system's model base may also be explicit or implicit (the models of all registered persons constitute the model base). To maintain the same response speed, explicit models require much more computing power than implicit models, so the system tends toward implicit recognition. One form of implicit recognition uses a classifier (for example, a neural-network-based classifier, well known in computer science; see Pattern Classification, Richard O. Duda et al., Chinese translation by Li Hongdong et al., China Machine Press, 2003, pp. 230-283) to recognize the breathing characteristic parameters collected in real time. In this case a person's model is embodied as a set of classifier weights: with these weights, the classifier gives a high output value when the identity is correct and a low output value when it is wrong. The weights are obtained by training the classifier with multiple sets of breathing characteristic parameters belonging to multiple persons, each person contributing at least one set. Training may involve newly added breathing characteristic parameters as well as those already in the system. The goal of training is to give the classifier a good ability to distinguish breathing characteristic parameters from different persons: for parameters from person A, the classifier should give a high output value for the class of person A and a low value for person B, and for parameters from person B the exact opposite. The training process may use various methods, such as basic back-propagation, radial basis functions, matched filters, or deep learning. The trained classifier is then used to identify persons.
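As a rough illustration of the implicit-model idea, the sketch below trains a minimal linear (softmax) classifier, standing in for the neural-network classifier cited above, on synthetic breathing characteristic parameters for two hypothetical persons; the learned weight set plays the role of the implicit model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: 20 sets of breathing characteristic
# parameters for each of two registered persons (8 synthetic features).
X = np.vstack([rng.normal(0.0, 0.3, (20, 8)),   # person A
               rng.normal(2.0, 0.3, (20, 8))])  # person B
y = np.array([0] * 20 + [1] * 20)
onehot = np.eye(2)[y]

# The weight set (W, b) plays the role of the implicit model: it is
# trained so that the output for the correct identity is high.
W = np.zeros((8, 2))
b = np.zeros(2)
for _ in range(300):                        # plain gradient descent
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)       # softmax response values
    grad = (p - onehot) / len(X)
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)

def identify(x):
    """Quantified response value for each registered identity."""
    logits = x @ W + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

resp = identify(rng.normal(0.0, 0.3, 8))    # new sample from person A
```

After training, the response for the correct identity (person A here) exceeds the response for the other identity, which is the behavior the text requires of the trained classifier.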
In the registration stage, to improve recognition, it is sometimes necessary to train the classifier with multiple sets of breathing characteristic parameters belonging to the same person. The reason is that, owing to noise, even different breathing data samples of the same person obtained with the same breath recording device may yield somewhat different breathing characteristic parameters, and such differences can degrade identification performance. Multiple sets of breathing characteristic parameters (obtained from multiple breathing data samples of the same person) reflect the person's intrinsic breathing characteristics more accurately and completely; training the classifier with them weakens the influence of noise and yields a model that fully reflects the person's individual breathing characteristics.
The identification process of the system comprises the following steps: collect a breathing data sample of the person to be identified and extract that person's breathing characteristic parameters from it; compare the newly acquired breathing characteristic parameters with the existing models in the system (generally a plurality of models) to obtain a quantified degree of conformity for each model; then, based on these degrees of conformity, perform decision analysis through some decision logic to determine the identity of the person. The basic principle of the decision analysis is that a model with a higher computed degree of conformity is more likely to correspond to the true identity. When a classifier is used for identification, the breathing characteristic parameters from a person are first input to the trained classifier; the classifier produces a quantified response value for each person identity in the system, and the response value corresponding to the correct identity is higher than those corresponding to other identities.
Identification here takes two forms: recognition and verification. Recognition means determining the identity of the person without any prior assumption about who they are; it requires checking the conformity of the newly obtained breathing characteristic parameters against all models in the system and taking the model with the highest degree of conformity as the result. Verification means that the person first makes an identity claim; the system then acquires the person's breathing characteristic parameters and checks their conformity with the claimed model, and if the degree of conformity exceeds a certain threshold, the person is considered to have the claimed identity. In general, recognition places higher demands on the identification system.
In recognition, the system's decision depends on the kind of test performed. In a closed-set test, the person to be identified is assumed to be registered in the system, i.e. the system assumes there are no impostors and need only find the most similar person in the model base as the identification result. In an open-set test, the person may not be registered in the system, i.e. the possibility of an impostor is not excluded; the system must not only find the most similar person in the model base but also examine the degree of similarity to determine whether the person is an impostor.
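The closed-set and open-set decisions can be sketched with a few lines of decision logic; the response values and the threshold below are illustrative, not values from the disclosure:

```python
def decide(responses, open_set=False, threshold=0.8):
    """Decision logic sketch. `responses` maps each registered identity
    to the classifier's quantified degree of conformity. A closed-set
    test returns the best match directly; an open-set test additionally
    requires the best match to clear a threshold, otherwise the person
    is treated as an impostor. The 0.8 threshold is illustrative."""
    best = max(responses, key=responses.get)
    if open_set and responses[best] < threshold:
        return None  # no registered model fits well enough: impostor
    return best

scores = {"person_A": 0.91, "person_B": 0.34, "person_C": 0.12}
```

With these scores, both tests return "person_A"; in the open-set test, a best score below the threshold would instead be rejected as an impostor.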
The breathing-based person identification method of this embodiment has the following advantages. A person's breathing characteristic parameters reflect the essential characteristics of that person's breathing; these characteristics are independent of any specific breathing mode and do not change with the external environment or with the person's temporary physiological responses, so the identification method is highly robust. In addition, the system obtains the person's breathing state from thermal infrared images of the nose, which can be collected at a relatively long distance without direct contact between the person and the equipment, so the identification process causes little disturbance to the person being identified.
The person identification system in this embodiment may be implemented on different computing devices (any number of them), transmission environments, and/or configurations. The computing device may be a notebook computer, desktop computer, workstation, mainframe, server, tablet computer, smartphone, smart appliance, etc. The system identifies a person based on their breathing characteristics.
The system includes a breath recording device. For a person in its field of view, the breath recording device captures one or more breathing data samples, each comprising a number of breathing data frames that cover the temperature variation of the person's nose over a plurality of consecutive breathing cycles. The captured samples are then processed by the system. The breath recording device may be implemented in a variety of ways, but all implementations produce the same result: breathing data frames.
A first implementation of the breath recording device is a thermal infrared imaging device. The device collects the breathing data of a stationary person located directly in front of it and identifies that person. By adjusting the position, orientation, and focal length of the thermal infrared imaging device, the person's nose can be captured at a predetermined angle with relatively high resolution in the image. The identification process comprises the following steps. First, continuous image acquisition by the thermal infrared imaging device is started by some initialization device and/or an initialization routine in the computer; the acquired images include the nose of the person to be identified. The initialization device or routine has low power consumption or low CPU occupancy and can run continuously. Second, the person's nose region is located in each thermal infrared image and the image is cropped to that region; the cropped thermal infrared images are the breathing data frames. Thereafter, all breathing data frames of the person are processed by the method described above, finally yielding a set of breathing characteristic parameters on which person identification is completed.
Locating the nose in the thermal infrared image and cropping the image to that region is automated by computer. The basic operation is to determine the state of the nose (position, size, orientation, etc.), which can be achieved with object tracking, a fundamental technique in computer vision. The process of tracking the nose in the thermal infrared images comprises the following steps. First, the nose is detected, manually or automatically, in the first thermal infrared image frame. For automatic nose detection, the system acquires in advance thermal infrared image samples of the noses of a large number of different persons, extracts general features of the nose from these samples, and then performs detection based on those features; the general features may be structural, such as two approximately circular regions at the nostrils or a semicircular dark block at the nasal ala, or any other suitable features. Second, with the nose state detected in the first frame as the initial state, the features of the nose are acquired and a description model of the nose is constructed. Third, in subsequent images, the current state of the nose is estimated using the description model together with methods such as statistical filtering or density estimation, and the target model is updated with the estimated state.
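The "detect in the first frame" step can be illustrated with a naive sliding-window template match over a synthetic temperature frame. This is only a sketch: a real system would use the general nose features and tracking methods described above, and the frame, template, and patch location are all invented for the example:

```python
import numpy as np

def detect_nose(frame, templ):
    """Naive first-frame detection: slide a template over the frame and
    return the top-left corner with the smallest sum of squared
    temperature differences."""
    H, W = frame.shape
    h, w = templ.shape
    best, best_pos = float("inf"), (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            d = ((frame[i:i + h, j:j + w] - templ) ** 2).sum()
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos

# Synthetic 12x12 thermal frame with one cool nostril-like 3x3 patch.
frame = np.full((12, 12), 34.0)
frame[4:7, 5:8] = 31.0
templ = np.full((3, 3), 31.0)   # hypothetical template of a cool patch
pos = detect_nose(frame, templ)
```

The detected position would then seed the description model used to track the nose through subsequent frames.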
In one implementation, the person identification system includes a breathing modeler and a recognizer. The system receives from the breath recording device a plurality of breathing data samples of a person, each comprising a certain number of breathing data frames and covering the temperature variation of the person's nose over a plurality of consecutive breathing cycles. Each breathing data sample is provided to the breathing modeler, which processes it. As previously described, for each sample the breathing modeler performs in turn: extraction of the breathing parameter vectors; breathing configuration clustering and sub-breathing configuration clustering; and extraction of the dynamic and static breathing characteristic parameters, finally obtaining a set of breathing characteristic parameters. The breathing characteristic parameters corresponding to all breathing data samples are eventually provided to the recognizer. The recognizer has two main functions: in the registration stage, it trains a classifier with these breathing characteristic parameters so that the classifier can distinguish the registered persons well; in the identification stage, the classifier is used for person identification.
In one implementation, a system includes a processor. The processor may be one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. The processor may retrieve software instructions stored in the memory and execute the instructions.
The system includes an interface, which may be any of various software-based or hardware-based interfaces. Through these interfaces the system can communicate with other devices (servers, data sources, external data repositories, etc.), and it may also communicate with other communication devices via a communication network.
The system includes a memory, which may be coupled to the processor. The memory may be any computer-readable medium, including volatile memory such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory such as read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, hard disks, optical disks, and magnetic tape.
The system includes modules and data, which may be coupled to the processor. The modules may include routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Modules may be implemented on various hardware, including signal processors, state machines, logic circuitry, and any device or component that manipulates data according to operational instructions. The data is used to store what the modules acquire, receive, or generate.
A module may be hardware, a set of software instructions, or a combination of the two. Where a module is hardware, it is implemented as one or more devices such as computers, processors, state machines, or logic arrays. Where a module is a set of software instructions, it is a set of instructions that, when executed by a processing unit, performs the required functions. The software instructions may be stored in a variety of storage devices and, where needed, may be downloaded over a network.
In one implementation, the modules include a feature extractor, a breathing modeler, a recognizer, and other modules, all executed by the system. The data includes breathing data, extracted feature data, cluster data, classifier data, and other data, and is used to store the output of the modules.
The identification process of the system to the person will be described below, and the core of this identification process is to perform cluster analysis on the breathing positions and the sub-breathing positions and further extract dynamic breathing characteristic parameters and static breathing characteristic parameters.
As previously mentioned, for a person in the field of view of the breath recording device, the device captures a number of breathing data samples, each containing a number of breathing data frames that cover a plurality of consecutive breathing cycles. These samples are sent to the system. In one example, the breath recording device captures data from a person and generates a breathing data sample of 3000 breathing data frames, denoted f1, f2, f3, …, f3000, which the system processes further.
After the system receives the breathing data sample, the feature extractor 212 processes every breathing data frame in the sample, extracting a static breathing parameter vector and a dynamic breathing parameter vector from each frame. The static breathing parameter vector of each frame comprises a plurality of static breathing parameters, and the dynamic breathing parameter vector comprises a plurality of dynamic breathing parameters. The nose temperature data, the static breathing parameter vectors, and the dynamic breathing parameter vectors are stored in the nose temperature and breathing parameter data 216.
In one example, a static breathing parameter vector contains 12 static breathing parameters, denoted [ds1 … ds12], and a dynamic breathing parameter vector contains 14 dynamic breathing parameters, denoted [df1 … df14]. Thus, for the 3000 breathing data frames (f1 to f3000) in the above sample, there are 3000 static breathing parameter vectors and 3000 dynamic breathing parameter vectors. Together, the static and dynamic breathing parameter vectors can be represented as a two-dimensional matrix of dimensions [3000 × (14 + 12)].
The process of modeling the breathing configurations and sub-breathing configurations is described below. All dynamic breathing parameter vectors derived from the breathing data sample are cluster-analyzed by the breathing modeler to form a plurality of breathing configuration clusters; the number of breathing configuration clusters may range from 2 to 10.
In one example, the number of breathing configurations to be identified is 3. The breathing modeler runs a K-means clustering algorithm over all dynamic breathing parameter vectors of frames f1 to f3000, creating 3 clusters. From all dynamic breathing parameter vectors in the multidimensional space, the breathing modeler 110 randomly selects three vectors (VA, VB, VC) as initial cluster centers and computes the Euclidean distances D(Vi, VA), D(Vi, VB), D(Vi, VC) of every other dynamic breathing parameter vector from VA, VB, VC by D(Vi, V'k) = (Σ(Vij - V'kj)^2)^0.5, where Vi denotes a dynamic breathing parameter vector to be classified (i = 1, …, 3000), V'k is an initial cluster center (k = 1, 2, 3), Vij is the j-th component of Vi (j = 1, …, 14), and V'kj is the j-th component of V'k (j = 1, …, 14). Once all Euclidean distances have been computed, clusters are created around the three initial centers: for each dynamic breathing parameter vector Vi, the nearest initial cluster center is found and Vi is assigned to the class of that center. After this initial partitioning, all dynamic breathing parameter vectors are divided into three clusters. The breathing modeler 110 then computes the mean center Mk of all dynamic breathing parameter vectors in each cluster by Mk = (Σ Vi)/Nk, where Vi is a dynamic breathing parameter vector contained in the k-th cluster (i = 1, …, Nk) and Nk is the total number of dynamic breathing parameter vectors in the k-th cluster (k = 1, 2, 3). The three initial cluster centers V'k are replaced by the three computed means, and the breathing modeler 110 creates clusters again around the three modified centers.
Proceeding in this manner, the respiratory modeler 110 repeatedly updates the three cluster centers and re-forms the clusters around the updated centers until the centers no longer change. This yields three clusters, each corresponding to a recognizable breathing configuration, and each containing a number of points that represent individual dynamic breathing parameter vectors.
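The iterative procedure just described — random initial centers VA, VB, VC, Euclidean-distance assignment, mean update, repeat until the centers stop moving — can be sketched as follows. This is a minimal NumPy illustration; the synthetic random vectors stand in for the 3000 real frames, and all names are ours, not the patent's:

```python
import numpy as np

def kmeans(vectors, k, seed=0, max_iter=100):
    """K-means as the text describes: pick k data points as initial
    cluster centers, assign each vector to its nearest center by the
    Euclidean distance D(Vi, V'k) = sqrt(sum_j (Vij - V'kj)^2), replace
    each center with the mean of its cluster, and repeat until the
    centers no longer change."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(max_iter):
        # Distance of every vector to every current center.
        dists = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Mean update Mk = (sum Vi) / Nk; keep the old center if a cluster empties.
        new_centers = np.array([vectors[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# 3000 frames of 14-dimensional dynamic breathing parameter vectors (synthetic).
frames = np.random.default_rng(42).normal(size=(3000, 14))
labels, centers = kmeans(frames, k=3)
```

Each of the 3000 frames receives one of three labels, and `centers` holds the three converged 14-dimensional cluster centers.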
To describe the breathing condition of the person 104 more precisely, the respiratory modeler subjects each breathing configuration cluster to a further round of cluster analysis, producing a plurality of sub-breathing configuration clusters within each breathing configuration cluster. In one implementation, the number of sub-breathing configurations may range from 2 to 10.
In one example, the number of sub-breathing configurations to be identified is 8. For each breathing configuration cluster, the respiratory modeler runs a K-means clustering algorithm on the dynamic respiratory parameter vectors it contains, creating 8 sub-clusters. For 3 breathing configuration clusters, this yields 24 sub-breathing configuration clusters in total. Each resulting sub-cluster corresponds to a sub-breathing configuration identifiable within a given breathing configuration, and each sub-cluster contains a certain number of dynamic breathing parameter vectors.
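The two-level scheme — 3 breathing configuration clusters, then 8 sub-clusters within each, giving 24 sub-clusters — can be sketched under the same assumptions (synthetic data, our own helper names):

```python
import numpy as np

def kmeans(vectors, k, rng, max_iter=100):
    # Minimal K-means: data points as initial centers, then alternate
    # nearest-center assignment and mean updates until convergence.
    centers = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(max_iter):
        labels = np.linalg.norm(vectors[:, None] - centers[None], axis=2).argmin(axis=1)
        new = np.array([vectors[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(0)
frames = rng.normal(size=(3000, 14))        # 14-dim dynamic parameter vectors

top_labels, _ = kmeans(frames, 3, rng)      # 3 breathing configuration clusters

sub_centers = []                            # 8 sub-clusters per configuration
for i in range(3):
    members = frames[top_labels == i]
    _, centers = kmeans(members, 8, rng)
    sub_centers.append(centers)             # each block is [8 x 14]

sub_centers = np.vstack(sub_centers)        # 3 x 8 = 24 sub-cluster centers
```

The stacked `sub_centers` array holds one 14-dimensional center per sub-breathing configuration.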
The breathing configuration clustering and sub-breathing configuration clustering described above may use any clustering method, including but not limited to K-means clustering and fuzzy C-means clustering. Data describing the breathing configuration clusters and sub-breathing configuration clusters is stored in the cluster data.
After the clusters and sub-clusters are created, the respiratory modeler determines a set of respiratory characteristic parameters for the person to be identified, as follows. For each sub-breathing configuration cluster described above, the center of all the dynamic breathing parameter vectors it contains is calculated. The center of a sub-breathing configuration cluster can be represented as a 14-dimensional vector VDij, where i is the number of the breathing configuration cluster to which it belongs and j is the number of the sub-cluster within that breathing configuration cluster. Thus, if a breathing configuration cluster contains 8 sub-clusters, their centers can be represented as an [8 x 14] two-dimensional matrix WDi, where i is the number of the breathing configuration cluster. WDi is composed of the 8 vectors [VDi1 VDi2 … VDi8] and is the dynamic respiratory characteristic parameter corresponding to the i-th breathing configuration. In addition, the respiratory modeler calculates the mean of the static respiratory parameter vectors in each breathing configuration cluster. For each breathing configuration cluster this mean can be represented as a 12-dimensional vector VSi, where i is the number of the breathing configuration cluster; VSi is the static respiratory characteristic parameter corresponding to the i-th breathing configuration. VSi is calculated for each breathing configuration cluster and then appended to each row of the dynamic respiratory characteristic parameter WDi of that cluster. This gives a data set for each breathing configuration cluster that can be represented as a two-dimensional matrix Wi, where i is the number of the breathing configuration cluster; Wi has dimensions [8 x (14 + 12)].
Further, assuming a person has 3 breathing configuration clusters, the complete respiratory data set for that person can be represented as a two-dimensional matrix of dimensions [3 x 8 x (14 + 12)], i.e., [24 x 26]. This data set, which contains the dynamic and static respiratory characteristic parameters corresponding to the respiratory data samples, is called the respiratory characteristic parameters. In general, a complete set of respiratory characteristic parameters may be represented as a two-dimensional matrix of dimensions [(number of breathing configurations to be identified) x (number of sub-breathing configurations to be identified per breathing configuration) x (total number of parameters in the static respiratory parameter vector + total number of parameters in the dynamic respiratory parameter vector)]. Data describing the respiratory characteristic parameters is stored in the characteristic classifier data.
In summary, the method by which the respiratory modeler 110 acquires the respiratory characteristic parameters comprises: performing cluster analysis on the dynamic respiratory parameter vectors provided by the feature extractor to create a predetermined number of breathing configuration clusters; for each breathing configuration cluster, performing cluster analysis again on all of its dynamic respiratory parameter vectors to form a predetermined number of sub-breathing configuration clusters; and calculating, for the respiratory data sample, the following indexes to determine the respiratory characteristic parameters corresponding to the sample: the center of all dynamic respiratory parameter vectors in each sub-breathing configuration cluster (used to generate the dynamic respiratory characteristic parameters), and the mean of the static respiratory parameter vectors in each breathing configuration cluster (used to generate the static respiratory characteristic parameters). As noted above, a complete set of respiratory characteristic parameters is a two-dimensional matrix of dimensions [(number of breathing configurations to be identified) x (number of sub-breathing configurations to be identified per breathing configuration) x (total number of parameters in the static respiratory parameter vector + total number of parameters in the dynamic respiratory parameter vector)].
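The assembly of the final parameter matrix — the [8 x 14] sub-cluster centers WDi with the 12-dimensional static mean VSi appended to each row, stacked over 3 configurations into [24 x 26] — can be sketched with placeholder values. The random arrays here are stand-ins for the real cluster statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
n_configs, n_sub, dyn_dim, stat_dim = 3, 8, 14, 12

blocks = []
for i in range(n_configs):
    # WDi: centers VDi1..VDi8 of the 8 sub-clusters of configuration i.
    WDi = rng.normal(size=(n_sub, dyn_dim))
    # VSi: mean of the static respiratory parameter vectors of configuration i.
    VSi = rng.normal(size=stat_dim)
    # Wi: VSi appended to every row of WDi -> [8 x (14 + 12)].
    Wi = np.hstack([WDi, np.tile(VSi, (n_sub, 1))])
    blocks.append(Wi)

# Complete respiratory characteristic parameters: [3 x 8 x (14 + 12)] = [24 x 26].
W = np.vstack(blocks)
```

Stacking the three [8 x 26] blocks reproduces the [24 x 26] matrix described in the text.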
The breathing configuration clustering and sub-breathing configuration clustering may be implemented by K-means clustering, fuzzy C-means clustering, or any other clustering method.
By repeating the processes of collecting respiratory data samples, extracting respiratory parameter vectors, performing cluster analysis, and extracting respiratory characteristic parameters for a person to be identified, multiple sets of respiratory characteristic parameters can be obtained for that person, all of which reflect the person's individual breathing characteristics. Extracting multiple sets matters because no data acquisition process can completely avoid noise, and the acquisition of respiratory characteristic parameters in this system is no exception. If a model representing a person were generated from only a single set of that person's respiratory characteristic parameters, noise introduced while generating that set would likely be prominent in the model. A model generated from multiple sets of respiratory characteristic parameters better suppresses noise and incidental factors, is more representative and stable, and thereby improves the performance of the identification system.
After the respiratory modeler determines one or more sets of respiratory characteristic parameters for a person to be identified, the identifier 112 trains a classifier on those parameters so that the classifier discriminates well for the registered person. Completing the training completes the registration of the person, after which the identifier can use the classifier to recognize the person in real time. The classifier may employ a supervised learning algorithm, such as a support vector machine (SVM), Bayesian estimation, decision trees, or neural networks. Multiple persons are registered in sequence according to the above flow.
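A registration step along these lines, using an SVM (one of the supervised learners the text lists), might look as follows. The synthetic feature sets, the flattening of each [24 x 26] parameter matrix into a 624-dimensional vector, and all names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from sklearn.svm import SVC  # assumes scikit-learn is available

rng = np.random.default_rng(2)

def feature_sets(offset, n_sets):
    # Stand-in for several [24 x 26] respiratory characteristic parameter
    # matrices recorded from one person, flattened to 624-dim vectors.
    return rng.normal(loc=offset, size=(n_sets, 24 * 26))

# Two registered persons, five parameter sets each; labels give identity.
X = np.vstack([feature_sets(0.0, 5), feature_sets(1.0, 5)])
y = np.array([0] * 5 + [1] * 5)

clf = SVC(kernel="rbf").fit(X, y)   # training completes the registration

probe = feature_sets(1.0, 1)        # a fresh sample from person 1
pred = int(clf.predict(probe)[0])
```

Training on several parameter sets per person, rather than one, is exactly the noise-suppression argument made in the preceding paragraph.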
Once a person's registration is complete, the system can identify that person at any time thereafter. On-site, real-time identification is performed by the identifier 112 based on the respiratory characteristic parameters of the person to be identified. The process of extracting those parameters comprises: recording a respiratory data sample of the person to be identified with a breath recording device; extracting the static and dynamic respiratory parameters of the sample with the feature extractor 212; obtaining a set of respiratory characteristic parameters for the sample with the respiratory modeler 110; and finally inputting that set into the identifier 112, which calculates the degree of agreement between the extracted respiratory characteristic parameters and each existing model in the database.
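The matching step — scoring the probe's parameters against every registered model and applying a similarity threshold (claim 3 requires at least 70%) — could be sketched like this. Cosine similarity is our stand-in for the unspecified "degree of agreement" measure, and the stored models here are synthetic:

```python
import numpy as np

def match_degree(probe, model):
    # Illustrative similarity: cosine of the angle between the two
    # flattened [24 x 26] respiratory characteristic parameter matrices.
    return float(probe @ model / (np.linalg.norm(probe) * np.linalg.norm(model)))

rng = np.random.default_rng(3)
models = {name: rng.normal(size=24 * 26) for name in ("person_a", "person_b")}

# A probe that is person_b's model plus a little measurement noise.
probe = models["person_b"] + rng.normal(scale=0.05, size=24 * 26)

scores = {name: match_degree(probe, m) for name, m in models.items()}
best = max(scores, key=scores.get)
threshold = 0.70                    # similarity threshold from claim 3
identified = best if scores[best] >= threshold else None
```

If no stored model clears the threshold, `identified` stays `None` and the person is treated as unregistered.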
One application scenario of the invention is automatic identification of a vehicle driver. In a specific implementation, several thermal infrared imaging devices are installed at specific locations on the car, for example below the windshield, as the breath recording device. The position and focal length of each thermal infrared imaging device are adjusted in advance so that it can image the nose of a person standing at a specific position in front of the vehicle. An initialization routine in the recognition system's processor operates on the output of the thermal infrared imaging devices: it monitors the temperature distribution in the environment, and when that distribution meets certain conditions (for example, a large area at elevated temperature, such as the temperature of a human face), the routine concludes that a person has been detected, whereupon the thermal infrared imaging device begins object (nose) detection and the subsequent recognition operations. For example, when a person wants to unlock the car, he first walks to the predetermined position and stands facing the thermal infrared imaging device. The initialization routine detects the person's presence and activates the device to acquire, over a period of time, a plurality of successive thermal infrared images containing the nose region of the person to be identified. The system then automatically processes the thermal infrared images, extracts the respiratory characteristics, and compares them against the respiratory characteristic models stored in the database. The model with the highest matching degree determines the identity of the person currently at the driving position.
In this way, the strong sensitivity to illumination that affects face recognition with visible-light images is avoided, and the potential risks of visible-light face recognition are effectively eliminated. Once the person has been identified, several intelligent operations can be performed, such as automatically unlocking the car or automatically adjusting the seat.
Another application scenario of this embodiment is mobile phone user identification. In a specific implementation, a small thermal infrared imaging device is arranged at the upper or lower end of one side of the phone's screen, with its position and focal length adjusted in advance so that it can image the nose of the person holding the phone. When a person wants to unlock the phone, thermal infrared images covering the breathing over a period of time are acquired. The system then automatically processes these images, extracts the respiratory characteristics, and compares them with the models in the database; the model with the highest matching degree determines the identity of the current person. This likewise avoids the strong sensitivity to illumination of visible-light face recognition and effectively eliminates its potential risks.
It should be noted that the above description of the person identification system and the associated methods is merely a specific implementation of the underlying principles, and the features and methods described herein do not limit the present invention.

Claims (4)

1. A person identification method based on nose image features during breathing is a method for identifying persons from thermal infrared image features, wherein, over a plurality of breathing cycles of a person, the distribution of nose temperature in the time and space domains carries the person's physiological and behavioral characteristics; the person's unique respiratory characteristic parameters are extracted from the static temperature distribution and the dynamic temperature changes at specific positions on the nose, and the identity of the corresponding person is then determined by comparing respiratory parameter features; the nasal static respiratory parameters comprise the temperature of the columella nasi, the temperature of the nose tip, and the temperature of the lower half of the nose bridge; the nasal dynamic respiratory parameters comprise: the frequency at which the left nostril reaches its lowest or highest temperature over a plurality of breathing cycles, the lowest temperature of the left nostril, and the highest temperature of the left nostril; the nasal dynamic respiratory parameters further comprise the frequency at which the right nostril reaches its lowest or highest temperature, the lowest temperature of the right nostril, the highest temperature of the right nostril, the maximum low-temperature area of the left nostril, and the maximum low-temperature area of the right nostril; and the nasal dynamic respiratory parameters further comprise the ratio of the maximum low-temperature-area width of the left nostril to the width of the columella nasi, the ratio of the maximum low-temperature-area width of the right nostril to the width of the columella nasi, and the time difference between the left and right nostrils reaching their highest or lowest temperature;
wherein the nasal static respiratory parameter feature is obtained by computing the difference between the nasal static respiratory parameters of a plurality of breathing cycles and the current ambient temperature and taking the mean of the differences as the nasal static respiratory parameter feature; the nasal dynamic respiratory parameter feature is obtained by performing cluster analysis on the nasal dynamic respiratory parameters of a plurality of breathing cycles, the dynamic breathing configuration clusters produced by the cluster analysis serving as the nasal dynamic respiratory parameter feature; and the consecutive plurality of breathing cycles is at least 10.
2. The person identification method based on nose image features during breathing according to claim 1, wherein the dynamic breathing configuration clusters comprise main configuration clusters and sub-configuration clusters, wherein:
the main configuration clusters are a plurality of first-level configuration clusters obtained by cluster analysis of vectors formed from the nasal dynamic respiratory parameters of the plurality of breathing cycles;
and the sub-configuration clusters are a plurality of second-level configuration clusters obtained by performing cluster analysis again within each of the first-level configuration clusters.
3. The method of claim 1, wherein the similarity threshold is at least 70%.
4. The person identification method based on nose image features during breathing according to claim 1 or 2, wherein the cluster analysis is performed by a K-means clustering method or a fuzzy C-means clustering method.
CN201610919594.5A 2016-10-21 2016-10-21 Personnel identification method based on nose image features in breathing process Active CN106485232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610919594.5A CN106485232B (en) 2016-10-21 2016-10-21 Personnel identification method based on nose image features in breathing process


Publications (2)

Publication Number Publication Date
CN106485232A CN106485232A (en) 2017-03-08
CN106485232B true CN106485232B (en) 2020-10-30

Family

ID=58269881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610919594.5A Active CN106485232B (en) 2016-10-21 2016-10-21 Personnel identification method based on nose image features in breathing process

Country Status (1)

Country Link
CN (1) CN106485232B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180060553A1 (en) * 2016-08-29 2018-03-01 Lenovo (Singapore) Pte. Ltd. Using eddy currents of exhaled breath for authentication
CN107736874B (en) 2017-08-25 2020-11-20 百度在线网络技术(北京)有限公司 Living body detection method, living body detection device, living body detection equipment and computer storage medium
CN109998496A (en) * 2019-01-31 2019-07-12 中国人民解放军海军工程大学 A kind of autonomous type body temperature automatic collection and respiratory monitoring system and method
US11941896B1 (en) * 2020-05-20 2024-03-26 Kismet Technologies Inc. System and method for alerting and monitoring health and wellness conditions of an individual within a defined environment
CN111598047B (en) * 2020-05-28 2023-06-27 重庆康普达科技有限公司 Face recognition method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103141080A (en) * 2010-09-23 2013-06-05 索尼电脑娱乐公司 User interface system and method using thermal imaging
CN104053469A (en) * 2011-11-15 2014-09-17 瑞思迈有限公司 Nasal mask system
CN105160331A (en) * 2015-09-22 2015-12-16 镇江锐捷信息科技有限公司 Hidden Markov model based face geometrical feature identification method
CN105962897A (en) * 2016-04-27 2016-09-28 南京理工大学 Self-adaptive snoring sound signal detection method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7111980B2 (en) * 2001-04-19 2006-09-26 Honeywell International Inc. System and method using thermal image analysis and slope threshold classification for polygraph testing


Non-Patent Citations (1)

Title
Research on Key Technologies of Human Behavior Recognition; He Weihua; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2013-02-15; vol. 2013, no. 02; pp. I138-25 *

Also Published As

Publication number Publication date
CN106485232A (en) 2017-03-08

Similar Documents

Publication Publication Date Title
CN106485232B (en) Personnel identification method based on nose image features in breathing process
CN106295313B (en) Object identity management method and device and electronic equipment
Zhao et al. Face recognition: A literature survey
Pentland et al. Face recognition for smart environments
CN110298221B (en) Self-help fitness method and system, electronic equipment and storage medium
Chellappa et al. Recognition of humans and their activities using video
US20050207622A1 (en) Interactive system for recognition analysis of multiple streams of video
US20080260212A1 (en) System for indicating deceit and verity
CN111540105A (en) Method, system, equipment and storage medium for controlling access control
CN109299690B (en) A method that can improve the accuracy of video real-time face recognition
Zhao et al. Transferable self-supervised instance learning for sleep recognition
Agarwal et al. Facial expression recognition through adaptive learning of local motion descriptor
Lanitis et al. Quantitative evaluation of the effects of aging on biometric templates
Chiu et al. A micro-control capture images technology for the finger vein recognition based on adaptive image segmentation
Nguyen et al. EEG-based person verification using multi-sphere SVDD and UBM
US9530049B2 (en) Kinetic-based tool for biometric identification, verification, validation and profiling
Bui et al. Personalized breath based biometric authentication with wearable multimodality
Abe et al. A novel local feature for eye movement authentication
Patil et al. Analysis of facial expression using deep learning techniques
TW201839635A (en) Emotion detection system and method
Alom et al. Optimized facial features-based age classification
Mehmood et al. A survey on various unimodal biometric techniques
Nandy et al. A novel approach to human gait recognition using possible speed invariant features
CN111428597A (en) Attendance management method based on face recognition
Pinčić Gait recognition using a self-supervised self-attention deep learning model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Xiao Shuming

Inventor after: Cheng Yan

Inventor after: Chen Qi

Inventor after: Liu Yongqing

Inventor after: Hu Qi

Inventor after: Wang Jianming

Inventor after: Liang Zhimin

Inventor after: Shen Weizhen

Inventor after: Chu Hongwei

Inventor before: Xiao Shuming

Inventor before: Chen Qi

Inventor before: Liang Zhimin

Inventor before: Wang Jianming

Inventor before: Zhen Qingkai

Inventor before: Mo Zengliang

Inventor before: Liu Yongqing

Inventor before: Tian Yuan

GR01 Patent grant