
CN120131005A - Indoor person fall detection method, device, equipment and storage medium - Google Patents

Indoor person fall detection method, device, equipment and storage medium

Info

Publication number
CN120131005A
CN120131005A (application CN202510624570.6A)
Authority
CN
China
Prior art keywords
temperature distribution
target person
distribution data
frame
variance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510624570.6A
Other languages
Chinese (zh)
Other versions
CN120131005B (en)
Inventor
朱家旗
戴宁
段俊丽
林宗涛
邹武合
阎禹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Institute of Advanced Studies of UCAS
Original Assignee
Hangzhou Institute of Advanced Studies of UCAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Hangzhou Institute of Advanced Studies of UCAS
Priority to CN202510624570.6A
Publication of CN120131005A
Application granted
Publication of CN120131005B
Legal status: Active


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/01: Measuring temperature of body parts; diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • A61B 5/103: Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B 5/1116: Determining posture transitions
    • A61B 5/1117: Fall detection
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7203: Signal processing for noise prevention, reduction or removal
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267: Classification involving training the classification device
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Evolutionary Computation (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of signal processing and discloses a method, device, equipment and storage medium for detecting falls of indoor persons. The method comprises: collecting temperature distribution data of a target person through an infrared array sensor deployed in a target indoor space; extracting, from the temperature distribution data, transverse morphological features (which characterize the spatial variation of the human posture) and longitudinal distance features (which characterize the dynamic evolution of the motion trend); and inputting the fused feature set into a classification model, which outputs a fall-state determination result. The invention achieves non-contact data acquisition through infrared sensing, and effectively improves fall-detection accuracy by combining transverse-longitudinal feature fusion with a classification algorithm.

Description

Method, device, equipment and storage medium for detecting falling of indoor personnel
Technical Field
The invention relates to the technical field of signal processing, in particular to a method, a device, equipment and a storage medium for detecting falling of indoor personnel.
Background
Indoor fall monitoring is increasingly important for life safety. Existing fall-detection schemes fall into wearable and non-wearable types. Wearable devices require the user to keep a sensor on the body, which is inconvenient and prone to false alarms; non-wearable schemes generally analyse image or sensor data, but are limited by environmental interference or processing complexity and therefore struggle to achieve high-precision real-time monitoring.
Therefore, a need exists for a non-contact fall detection method that can overcome the drawbacks of the prior art and improve the fall detection accuracy.
Disclosure of Invention
In view of the above, the present invention provides a method, apparatus, device and storage medium for detecting falling of indoor personnel, so as to improve the accuracy of falling detection.
In a first aspect, the invention provides a method of detecting a fall of an indoor person, the method comprising:
Acquiring temperature distribution data of a target person in a target indoor space, wherein the temperature distribution data is acquired through an infrared array sensor deployed in the target indoor space;
based on the temperature distribution data, extracting transverse morphological features and longitudinal distance features of the target person to obtain a fused feature set, wherein the transverse morphological features are used for characterizing posture changes of the target person and the longitudinal distance features are used for characterizing the movement trend of the target person;
And inputting the fusion feature set into a classification model to obtain a falling state judgment result of the target person.
According to the indoor-person fall detection method provided by the invention, the infrared array sensor deployed in the target indoor space collects the temperature distribution data of the target person in real time. Non-contact thermal imaging replaces traditional wearable devices or microwave radar, avoiding the inconvenience of body-worn sensors; the infrared array sensor collects only a low-resolution heat-source distribution, needing neither high-definition images nor electromagnetic-wave reflection signals, which simplifies hardware deployment. By extracting transverse morphological features (reflecting human posture) and longitudinal distance features (reflecting motion trend) and building a fused feature set, the composite characteristics of a fall can be captured from both spatial distribution and temporal evolution; compared with single-dimension feature descriptions (e.g., analysing only speed or acceleration), this markedly reduces misjudgements caused by ambiguous individual features. Feeding the fused feature set into a classification model for intelligent judgment copes with the complexity of fall behaviour and remedies the insufficient sensitivity of traditional threshold-rule methods under environmental interference or individual differences, thereby improving fall-detection accuracy while remaining non-invasive.
In an alternative embodiment, before extracting the transverse morphological feature and the longitudinal distance feature of the target person, the method further comprises:
calculating variance of temperature distribution data of the continuous frames;
when the variance of the temperature distribution data of two or more continuous frames is detected to exceed the variance threshold, the transverse morphological characteristics and the longitudinal distance characteristics of the target person are extracted.
According to the indoor-person fall detection method provided by the invention, setting a variance threshold means feature extraction starts only when a significant temperature change (e.g., vigorous human motion) is detected, avoiding false triggering by small ambient fluctuations (e.g., air-conditioning airflow or pet movement) and markedly reducing the system's false-alarm rate. Requiring two or more consecutive frames filters transient noise (e.g., a momentary heat source) and improves the reliability of feature extraction. Because features are extracted only when real activity occurs, the amount of invalid data processing is also reduced.
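As an illustration of this trigger, a minimal sketch follows; the 8 × 8 frames, the threshold value, and the function name are illustrative choices, not taken from the patent:

```python
import numpy as np

def detect_activation(frames, var_threshold, min_consecutive=2):
    """Return the index of the first frame starting a run of `min_consecutive`
    frames whose temperature variance exceeds `var_threshold`, else None."""
    run = 0
    for i, frame in enumerate(frames):
        if np.var(frame) > var_threshold:
            run += 1
            if run >= min_consecutive:
                return i - min_consecutive + 1  # start of the activated run
        else:
            run = 0
    return None

# Quiet room frames, then two consecutive high-variance frames (simulated motion).
quiet = [np.full((8, 8), 22.0) for _ in range(3)]
hot = np.full((8, 8), 22.0)
hot[2:5, 2:5] = 33.0  # warm blob -> high per-frame variance
print(detect_activation(quiet + [hot, hot], var_threshold=1.0))  # -> 3
```

A uniform 22 °C frame has zero variance, so only the two "person present" frames exceed the threshold, and the start index of that run is reported.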
In an alternative embodiment, extracting the transverse morphological feature of the target person includes:
acquiring, in each frame of temperature distribution data, the number of temperature pixels exceeding a preset room-temperature threshold, and dividing the total by the number of frames to obtain the average number of effective pixels;
Performing contour analysis on active pixel points in the continuous activated frames to extract morphological characteristics;
Determining the effective action area of the human body action of the target person based on the morphological characteristics;
Based on the average effective pixel point number and the effective action area, obtaining the transverse morphological characteristics of the target personnel;
The continuous activation frame is a frame sequence between a start frame and a stop frame, wherein the start frame is a frame in which the variance of the temperature distribution data of two continuous frames is detected to exceed a variance threshold for the first time, and the stop frame is a frame in which the variance of the temperature distribution data of two continuous frames is detected to be lower than the variance threshold for the first time.
According to the indoor personnel falling detection method provided by the invention, the average effective pixel point number and the effective action area are extracted, the average effective pixel point number reflects the action persistence and the space coverage, and the action type distinguishing capability is enhanced. And the start and stop time of the action is recorded by the activation frame, so that the time sequence feature identification precision of the falling action is improved. The effective action area maps the human body posture change into the geometric parameter through the pixel point density and the outline approximation, so that misjudgment caused by only depending on a single parameter (such as speed) in the traditional method is avoided, and the sensitivity to the falling amplitude is enhanced.
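A sketch of how the two transverse quantities could be computed, under assumed parameter values (room-temperature threshold, per-pixel area) that the patent does not specify; the rectangle approximation follows the patent's description of the effective action area:

```python
import numpy as np

def transverse_features(frames, room_temp=25.0, pixel_area_m2=0.01):
    """Average effective pixel count across frames, plus an effective action
    area approximated by the bounding rectangle of hot pixels in the busiest
    frame (the patent approximates the motion contour as a regular rectangle)."""
    hot_counts = [int(np.sum(f > room_temp)) for f in frames]
    avg_effective = sum(hot_counts) / len(frames)
    busiest = frames[int(np.argmax(hot_counts))]
    ys, xs = np.nonzero(busiest > room_temp)
    if len(ys) == 0:
        return avg_effective, 0.0
    rect_pixels = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return avg_effective, float(rect_pixels * pixel_area_m2)
```

On a cold frame plus one frame with a 3 × 3 warm region, this yields an average of 4.5 effective pixels and a 9-pixel rectangle scaled by the assumed per-pixel area.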
In an alternative embodiment, extracting the longitudinal distance feature of the target person includes:
Obtaining the maximum temperature distribution variance according to the maximum value in the variances of the temperature distribution data of each frame;
Between the start frame and the end frame of the target person's movement, acquiring the per-frame pixel count for each frame whose temperature distribution variance exceeds a temperature-difference threshold, and taking the maximum as the maximum number of reaction pixels;
And obtaining the longitudinal distance characteristic of the target person based on the maximum temperature distribution variance and the maximum number of reaction pixels.
According to the indoor personnel falling detection method provided by the invention, the maximum temperature variance directly reflects the intensity of temperature change, so that the detection sensitivity of sudden actions is improved. The maximum pixel number characterizes the area size affected by the action, and the maximum temperature variance is combined to distinguish falling from other large-range actions (such as jumping), so that false alarms are reduced. The method solves the problem that the complex falling behaviors are difficult to describe by single-dimensional characteristics (such as speed or acceleration only) in the traditional method.
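The two longitudinal quantities might be sketched as follows; here the "reaction" test is implemented as a frame-to-frame temperature change exceeding an assumed difference threshold, which stands in for the patent's temperature-difference criterion:

```python
import numpy as np

def longitudinal_features(frames, diff_threshold=3.0):
    """Maximum per-frame temperature variance over the action window, plus the
    maximum count of pixels whose change between consecutive frames exceeds
    `diff_threshold`."""
    max_var = max(float(np.var(f)) for f in frames)
    max_react = 0
    for prev, cur in zip(frames, frames[1:]):
        react = int(np.sum(np.abs(cur - prev) > diff_threshold))
        max_react = max(max_react, react)
    return max_var, max_react
```

A sudden appearance or disappearance of a warm region drives both the variance and the reacting-pixel count up, which is what distinguishes abrupt motion from slow drift.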
In an alternative embodiment, inputting the fusion feature set into the classification model to obtain a fall state determination result of the target person, including:
assigning different weight factors to each feature in the fused feature set to obtain fused features with different weights;
and inputting the fusion characteristics with different weights into a classification model to obtain a falling state judgment result of the target personnel.
According to the indoor personnel falling detection method provided by the invention, the weight factors are dynamically adjusted according to the contribution degree of the features to falling recognition, so that the classification deviation problem caused by the equal weight processing of all the features in the traditional detection method is solved, and the attention degree of the model to key features is improved. The method can adapt to different environments or individual differences through weight adjustment, and the generalization of the algorithm is enhanced.
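The weighting step can be sketched as a simple per-feature scaling before classification; the weight values below are illustrative, since the patent ties them to each feature's contribution to fall recognition without giving numbers:

```python
import numpy as np

def weight_features(features, weights):
    """Scale each fused feature by a normalised weight factor, so features
    with larger weights dominate the distance metric of the classifier."""
    f = np.asarray(features, dtype=float)
    w = np.asarray(weights, dtype=float)
    return f * (w / w.sum())  # normalise so the weights sum to 1
```

With normalised weights, a KNN-style distance computed on the weighted vector is equivalent to a weighted distance on the raw features, which is one common way to realise per-feature importance.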
In an alternative embodiment, the classification model includes a K-nearest neighbor algorithm model or a neural network model.
According to the indoor-person fall detection method provided by the invention, in small-sample scenarios (e.g., the limited data of a home user), KNN identifies fall patterns quickly through similarity measurement, avoiding the large training sets a neural network requires and lowering the deployment threshold. In big-data scenarios (e.g., multi-user data from a nursing home), a neural network captures nonlinear patterns of fall behaviour (e.g., implicit associations between different fall postures) through deep feature extraction, improving detection accuracy in complex scenes. The model type can thus be selected per application scenario (KNN for small-sample deployments, a neural network where accuracy in complex scenes is paramount), meeting diversified user requirements.
In an alternative embodiment, the infrared array sensor includes a top sensor set disposed on top of the target indoor space and a side sensor set disposed on a side of the target indoor space.
According to the indoor-person fall detection method provided by the invention, the top sensor group captures vertical posture changes (e.g., the sudden height drop of a fall) while the side sensor group monitors horizontal displacement (e.g., a sideways fall trajectory); multi-view fusion eliminates single-sensor blind spots (e.g., a top-view sensor struggles to detect ground contact). Joint analysis of the top-view and side-view data maps the two-dimensional temperature distributions into a three-dimensional motion trajectory (e.g., fall angle and contact position), strengthening the geometric description of fall postures. Multiple sensors also reduce the risk of detection failure when one sensor is occluded (e.g., furniture blocking the side-view sensor), improving system reliability.
In summary, in the data-acquisition stage the method deploys several infrared array sensor groups cooperatively to form a spatially complementary view: the top-view group captures the vertical height drop (e.g., the downward movement of the body's centre of gravity during a fall) while the side-view group monitors the horizontal displacement trajectory (e.g., a sideways fall path). Multi-view data fusion provides the three-dimensional modelling basis for the subsequent extraction of transverse morphological and longitudinal distance features, resolving the missed detections caused by single-sensor blind spots. In the feature-extraction stage, the variance-threshold trigger and the fused transverse/longitudinal features form a spatio-temporal association: feature extraction starts when the variance of consecutive frames exceeds the threshold, and linking the effective action area with the maximum temperature variance segments the action's starting point and dynamic process precisely, avoiding the single-time-point misjudgements of traditional schemes. In the classification stage, weight-factor assignment and the classification model optionally form a dynamically adapting closed loop: the transverse morphological and longitudinal distance features adjust the classification boundary through contribution-degree weighting, and elastic switching between a KNN model (small-sample scenarios) and a neural network (big-data scenarios) adapts the algorithm to different deployments.
In a second aspect, the invention provides an indoor personal fall detection device, the device comprising:
an acquisition module, configured to acquire temperature distribution data of a target person in a target indoor space, the temperature distribution data being collected by an infrared array sensor deployed in the target indoor space;
an extraction module, configured to extract transverse morphological features and longitudinal distance features of the target person based on the temperature distribution data, to obtain a fused feature set;
and a determination module, configured to input the fused feature set into a classification model to obtain a fall-state determination result for the target person.
In a third aspect, the invention provides a computer device comprising a memory and a processor, the memory and the processor being in communication with each other, the memory having stored therein computer instructions, the processor executing the computer instructions to thereby perform the indoor personal fall detection method of the first aspect or any of its corresponding embodiments.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the indoor personal fall detection method of the first aspect or any of its corresponding embodiments.
In a fifth aspect, the invention provides a computer program product comprising computer instructions for causing a computer to perform the indoor personal fall detection method of the first aspect or any of its corresponding embodiments described above.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for detecting a fall of an indoor person according to an embodiment of the present invention;
fig. 2 is a flow chart of a method for detecting fall of an indoor person according to an embodiment of the present invention;
fig. 3 is a block diagram of an indoor personal fall detection device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
To overcome the drawbacks of existing fall detection schemes, embodiments of the present invention provide an indoor personal fall detection method embodiment, it being noted that the steps shown in the flow chart of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different from that shown or described herein.
The flow of the method for detecting the falling of the indoor personnel in the embodiment is shown in fig. 1, and the method comprises the following steps:
S101, acquiring temperature distribution data of target personnel in a target indoor space, wherein the temperature distribution data is acquired through an infrared array sensor arranged in the target indoor space.
Specifically, an infrared array sensor (a non-contact thermal imaging device) deployed in the target indoor space collects the heat radiation of the target person in real time and generates a two-dimensional temperature distribution matrix. Each matrix cell (pixel) corresponds to the temperature at a position in the indoor space, forming dynamic data that reflects the thermal distribution of the body surface. The temperature distribution data is a low-resolution heat map (e.g., an 8 × 8 pixel array), avoiding the privacy concerns of high-definition imaging.
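For experimenting with the pipeline without hardware, a frame of the kind described here can be simulated; the ambient and body temperatures, blob size, and function name are all assumptions used as a test fixture, not real sensor code:

```python
import numpy as np

def simulate_frame(person_pos=None, ambient=22.0, body=33.0, shape=(8, 8)):
    """Build one low-resolution 'thermal frame' like the 8x8 pixel array the
    description mentions: ambient temperature everywhere, with a warm 2x2
    blob at `person_pos` (row, col) when a person is present."""
    frame = np.full(shape, ambient)
    if person_pos is not None:
        r, c = person_pos
        frame[r:r + 2, c:c + 2] = body
    return frame
```

Sequences of such frames (blob moving, appearing, or vanishing) make convenient inputs for testing the trigger and feature-extraction steps described below.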
S102, based on temperature distribution data, extracting transverse morphological features and longitudinal distance features of the target personnel to obtain a fusion feature set, wherein the transverse morphological features are used for representing gesture changes of the target personnel, and the longitudinal distance features are used for representing movement trends of the target personnel.
Specifically, the transverse morphological feature is to extract parameters related to the spatial distribution of the human body posture from the temperature distribution data, and the feature is used for quantifying morphological changes (such as standing, tilting and falling) of the human body posture in the horizontal direction. For example, the geometry of the body contour (e.g., approximately rectangular area, aspect ratio), the spatial concentration of effective temperature pixels (e.g., centroid position offset). Longitudinal distance features are parameters related to the time-series evolution of the motion extracted from the temperature distribution data, and are used for characterizing the dynamic trend of the human motion in the time dimension (such as the difference between the slow sitting and fast falling speeds). For example, the speed of movement of the center point of the temperature distribution, and the diffusion rate of the hot region (such as local temperature field abrupt changes caused by the human body touching the ground in the falling process). And combining the transverse morphological characteristics (spatial distribution parameters) and the longitudinal distance characteristics (time sequence dynamic parameters) into a multidimensional vector to form a fusion characteristic set.
S103, inputting the fusion feature set into a classification model to obtain a falling state judgment result of the target person.
Specifically, a classification model built on a machine-learning algorithm (e.g., KNN, SVM or a neural network) receives the fused feature set as input and, having been trained on the feature differences between normal activity and falls, outputs a binary result (fall / no fall). The model computes a probability from the position of the fused feature set in the multidimensional feature space (e.g., its distance from the fall cluster) and determines a fall if the probability exceeds a preset threshold. For example, when the contour area increases suddenly (transverse morphology) and the movement speed exceeds a critical value (longitudinal distance), the model determines that a fall has occurred.
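The KNN variant of this step can be sketched in a few lines; the training samples below are synthetic illustrations (0 = no fall, 1 = fall), not data from the patent:

```python
import numpy as np

def knn_classify(x, train_X, train_y, k=3):
    """Minimal K-nearest-neighbour majority vote over fused feature vectors."""
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to each sample
    votes = train_y[np.argsort(dists)[:k]]        # labels of the k closest samples
    return int(np.bincount(votes).argmax())       # majority label

# Synthetic fused features: [avg effective pixels, area, max variance, max react]
train_X = np.array([[1.0, 0.02, 2.0, 1.0], [1.2, 0.03, 2.5, 2.0],   # normal
                    [5.0, 0.10, 15.0, 9.0], [5.5, 0.12, 14.0, 8.0]])  # fall
train_y = np.array([0, 0, 1, 1])
```

A query vector close to the fall cluster is voted "fall" by its neighbours; weighting the features first (as in the optional embodiment above) simply reshapes this distance metric.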
Optionally, in step S101, the infrared array sensor includes a top-view sensor group disposed at the top of the target indoor space and a side-view sensor group disposed at a side of the target indoor space. The top-view sensors capture vertical posture changes of the human body (e.g., the sudden height drop during a fall), the side-view sensors monitor horizontal displacement (e.g., a sideways fall trajectory), and multi-view fusion eliminates single-sensor blind spots (e.g., a top-view sensor struggles to detect ground contact). Joint analysis of the top-view and side-view data maps the two-dimensional temperature distributions into a three-dimensional motion trajectory (e.g., fall angle and contact position), strengthening the geometric description of fall postures. Multiple sensors also reduce the risk of detection failure when one sensor is occluded (e.g., furniture blocking the side-view sensor), improving system reliability.
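A much-simplified illustration of the multi-view idea: take the horizontal (x, y) hot-spot position from the top view and a height coordinate from the side view's row index. This is a sketch of the concept, not the patent's exact mapping, and it assumes each frame contains at least one pixel above room temperature:

```python
import numpy as np

def fuse_views(top_frame, side_frame, room_temp=25.0):
    """Combine a top-view and a side-view thermal frame into a rough 3-D
    hot-spot position (x, y, z), where z is the side view's row centroid
    (larger z = closer to the floor in this toy convention)."""
    def centroid(frame):
        ys, xs = np.nonzero(frame > room_temp)
        return float(xs.mean()), float(ys.mean())
    x, y = centroid(top_frame)
    _, z = centroid(side_frame)
    return x, y, z
```

A fall would show up here as z drifting toward the floor rows of the side view while (x, y) shifts in the top view, which is the complementary information the two sensor groups provide.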
Optionally, before the step S102, a determination is further included as to whether to perform feature extraction, which specifically includes calculating a variance of temperature distribution data of consecutive frames, and extracting a transverse morphological feature and a longitudinal distance feature of the target person when it is detected that the variance of temperature distribution data of two or more consecutive frames exceeds a variance threshold. By setting the variance threshold, feature extraction (such as intense human body action) is started only when significant temperature change is detected, false triggering caused by small fluctuation of the ambient temperature (such as air-conditioning airflow and pet movement) is avoided, and the false alarm rate of the system is remarkably reduced. The continuous multi-frame judgment (two frames or more) can filter transient noise interference (such as transient heat source interference) and improve the reliability of feature extraction. Feature extraction is performed only when an active action occurs, reducing the amount of invalid data processing.
Optionally, the step S102 includes the extraction of the transverse morphological features and the longitudinal distance features.
Extracting the transverse morphological features comprises: obtaining, in each frame of temperature distribution data, the number of temperature pixels exceeding a preset room-temperature threshold, and dividing the total by the number of frames to obtain the average number of effective pixels; performing contour analysis on the active pixels in the consecutive activated frames to extract morphological characteristics; determining, from those characteristics, the effective action area of the target person's motion; and combining the average effective pixel count with the effective action area to obtain the transverse morphological features. The consecutive activated frames are the frame sequence between a start frame and an end frame, where the start frame is the frame at which the variance of the temperature distribution data of two consecutive frames first exceeds the variance threshold, and the end frame is the frame at which it first falls below that threshold.
In the above steps, the extraction of the effective action area involves the human body contour area and the centroid offset. The contour area is obtained by contour analysis: the temperature distribution matrix is binarized, the effective pixel region is extracted, and the projected area of the human body is computed as the number of effective pixels multiplied by the actual area of a unit pixel. The centroid offset is obtained by calculating the distance between the geometric center of the effective pixel region and a reference position.
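The binarization, contour-area, and centroid-offset computations above can be sketched as follows. This is a hedged illustration: `lateral_features`, the room-temperature threshold, and the per-pixel area are assumed names and values, and the frame centre is used as the reference position since the patent does not fix one:

```python
import numpy as np

def lateral_features(frames, room_temp_thresh, pixel_area):
    """Sketch of the lateral-morphology extraction described above.

    frames: the continuously activated frame sequence (2-D temperature
    arrays). room_temp_thresh separates body pixels from background;
    pixel_area is the real-world area (m^2) covered by one sensor pixel.
    Both are deployment-specific assumptions.
    """
    # Average number of effective pixels: total above-threshold pixels
    # across the sequence divided by the frame count.
    total_active = sum(int((f > room_temp_thresh).sum()) for f in frames)
    avg_effective_pixels = total_active / len(frames)

    # Binarize the frame with the largest active region and take its
    # pixel count times the unit-pixel area as the projected contour area.
    peak = max(frames, key=lambda f: (f > room_temp_thresh).sum())
    mask = peak > room_temp_thresh
    contour_area = int(mask.sum()) * pixel_area

    # Centroid offset: distance between the active-region centroid and
    # the frame centre (used here as the reference position).
    ys, xs = np.nonzero(mask)
    centroid = np.array([ys.mean(), xs.mean()]) if ys.size else np.zeros(2)
    reference = (np.array(peak.shape) - 1) / 2.0
    centroid_offset = float(np.linalg.norm(centroid - reference))

    return avg_effective_pixels, contour_area, centroid_offset
```

The three return values correspond to the average effective pixel count, the contour area, and the centroid offset named in the text.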
The extraction of the longitudinal distance feature includes the moving speed of the temperature distribution center point, calculated as the ratio of the distance between the center points of adjacent frames within a sliding time window to the time interval, followed by moving-average filtering. The maximum temperature distribution variance is obtained as the maximum of the per-frame variances of the temperature distribution data. Between the start frame and the end frame of the target person's body movement, the number of pixels whose variance of the temperature distribution data exceeds the temperature difference threshold is counted for each frame, and the maximum of these counts gives the maximum number of reacting pixels. The longitudinal distance feature of the target person is then obtained based on the maximum temperature distribution variance and the maximum number of reacting pixels.
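A possible NumPy sketch of this longitudinal-feature extraction, under the assumption that a "reacting" pixel is one deviating from the frame mean by more than the temperature-difference threshold; the function name, the smoothing window, and the thresholds are illustrative, not taken from the patent:

```python
import numpy as np

def longitudinal_features(frames, diff_thresh, dt, smooth_n=3):
    """Sketch of the longitudinal-distance features described above.

    frames: frame sequence between the start and end of the movement.
    diff_thresh: per-pixel temperature-difference threshold (assumed).
    dt: frame interval in seconds. smooth_n: moving-average window.
    """
    # Maximum temperature distribution variance over the sequence.
    max_variance = max(float(np.var(f)) for f in frames)

    # Maximum number of reacting pixels: pixels deviating from the frame
    # mean by more than diff_thresh, maximised over the sequence.
    max_react_pixels = max(
        int((np.abs(f - f.mean()) > diff_thresh).sum()) for f in frames
    )

    # Centre-point speed: distance between successive hot-region centroids
    # divided by dt, then moving-average filtered.
    centroids = []
    for f in frames:
        ys, xs = np.nonzero(f > f.mean() + diff_thresh)
        centroids.append(np.array([ys.mean(), xs.mean()]) if ys.size
                         else np.zeros(2))
    speeds = [float(np.linalg.norm(b - a)) / dt
              for a, b in zip(centroids, centroids[1:])]
    smoothed = []
    if speeds:
        n = min(smooth_n, len(speeds))
        kernel = np.ones(n) / n  # uniform moving-average kernel
        smoothed = [float(s) for s in np.convolve(speeds, kernel, mode="valid")]

    return max_variance, max_react_pixels, smoothed
```

The smoothed speed series plays the role of the filtered center-point moving speed, while the two scalars feed the longitudinal distance feature.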
Among the extracted features: the average number of effective pixels is defined as the total number of temperature pixels exceeding the room temperature threshold divided by the total frame count, i.e., the average per-frame count of pixels above the room temperature threshold, and the activated frames represent the duration of the motion. If no person is present in the indoor environment, this feature value is very small and essentially constant; when a person is present and moving, it becomes large. The maximum number of reacting pixels is the maximum, between the start frame and the end frame of the body movement, of the per-frame count of pixels whose variance exceeds the temperature difference threshold; this feature represents the number of pixels the infrared sensor registers for the corresponding motion. The maximum temperature distribution variance is the largest variance of the temperature distribution between the start frame and the end frame of the movement and characterizes the intensity of the motion trend. The effective action area is obtained by approximating the contour of the human body motion as a regular rectangle and taking the area of that rectangle. Dividing the number of active pixels in the activated frames by the number of pixels per unit area yields the approximate area in which the actual action captured in the frame occurred. These data features can serve as training features, with verification data used to validate the algorithm's judgment of whether an action occurred, and the effective action area can be compared with the estimated actual action area to determine which action took place.
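The rectangle approximation of the effective action area might be sketched as follows. The helper name `effective_action_area` and its parameters are assumptions; the "regular rectangle" is taken here as the axis-aligned bounding box of all active pixels across the activated frames:

```python
import numpy as np

def effective_action_area(frames, room_temp_thresh, pixel_area):
    """Approximate the body-motion contour as an axis-aligned rectangle
    and return its area, as described above.

    room_temp_thresh (active-pixel cutoff) and pixel_area (real-world
    area per pixel, m^2) are deployment-specific assumptions.
    """
    # Union of active pixels over the whole activated-frame sequence.
    mask = np.any([f > room_temp_thresh for f in frames], axis=0)
    if not mask.any():
        return 0.0
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1   # bounding-rectangle height in pixels
    w = xs.max() - xs.min() + 1   # bounding-rectangle width in pixels
    return float(h * w * pixel_area)
```

Comparing this rectangle area with the pixel-count contour area gives the coarse action-shape discrimination mentioned in the text.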
Optionally, in step S103, different weight factors are first assigned to each feature in the fusion feature set to obtain differently weighted fusion features, which are then input into the classification model to obtain the falling state judgment result of the target person. The weight factors are dynamically adjusted according to each feature's contribution to fall recognition, which avoids the classification bias caused by weighting all features equally in traditional detection methods and increases the model's attention to key features. Weight adjustment also allows the method to adapt to different environments or individual differences, enhancing the generalization of the algorithm.
In the above step, the classification model may adopt a K-nearest-neighbor (KNN) algorithm model or a neural network model. In small-sample scenarios (such as the limited data volume of a home user), KNN can rapidly identify fall patterns through similarity measurement, avoiding the neural network's need for large amounts of training data and lowering the deployment threshold. In big-data scenarios (such as multi-user data in a nursing home), the neural network captures nonlinear patterns of falling behavior (such as implicit associations among different fall postures) through deep feature extraction, improving detection accuracy in complex scenes. The model type is selected according to the requirements of the application scenario (for example, KNN when data is limited and the neural network when accuracy in complex scenes is required), meeting the needs of diverse users.
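A minimal weighted KNN classifier along these lines could look as follows; the toy feature vectors, the weights, and `k` are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def weighted_knn_classify(query, train_x, train_y, weights, k=3):
    """Minimal weighted k-nearest-neighbour classifier as sketched above.

    Each feature dimension is scaled by its fall-contribution weight
    before the Euclidean distance is measured; train_x is an (n, d)
    feature matrix and train_y the matching label vector.
    """
    diffs = (train_x - query) * weights        # per-dimension weighting
    dists = np.linalg.norm(diffs, axis=1)      # weighted Euclidean distance
    nearest = np.argsort(dists)[:k]            # indices of the k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]           # majority vote
```

In practice the weight vector would come from the contribution-degree analysis described above, so that a high-contribution feature (e.g. the maximum temperature variance) dominates the distance metric.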
In summary, the indoor person fall detection method provided by the embodiment of the invention works as follows. In the data acquisition stage, the collaborative deployment of multiple infrared array sensor groups forms a spatially complementary effect: the top-view sensors capture the height drop in the vertical direction (such as the downward movement of the human body's center of gravity during a fall), while the side-view sensors monitor the horizontal displacement track (such as a sideways fall path). Multi-view data fusion provides a three-dimensional spatial modeling basis for the subsequent extraction of transverse morphological features and longitudinal distance features, solving the missed-detection problem caused by the blind spots of a single sensor. In the feature extraction stage, the variance threshold triggering mechanism and the fusion of transverse and longitudinal features form a temporal-spatial association: feature extraction is started when the temperature variance of consecutive frames exceeds the threshold and, combined with the spatiotemporal linkage of the effective action area and the maximum temperature variance, the action starting point and its dynamic process are accurately segmented, avoiding the misjudgment of single-time-point decisions in traditional schemes. In the classification stage, weight factor assignment and classification model selection form a dynamically adaptive closed loop: the transverse morphological features and longitudinal distance features are weighted dynamically by contribution degree, and the KNN model (small samples) or the neural network model (large data volumes) is chosen flexibly to adapt the algorithm to different scenes.
The flow of the method for detecting a fall of an indoor person is described below with a specific example, and the flow of the method of this example is shown in fig. 2, and includes the following steps.
First, each frame of array data is acquired through the infrared array sensors arranged at the top and on the side of the room, and the maximum variance of the temperature pixels is computed. If the temperature distribution variance of two or more consecutive frames is larger than the temperature variance threshold, the posture-sensitive data features of the human body are extracted, including the average number of effective pixels, the number of activated frames, the maximum number of reacting pixels, the maximum temperature distribution variance, the effective action area, and the like. Finally, the extracted feature data are preprocessed according to the weight factors and then classified with KNN to judge whether a fall has occurred. In addition to the KNN algorithm, other learning algorithms with an equivalent classification effect, such as a random forest algorithm, may be used.
An infrared technique that uses the target human body as the source is adopted in place of microwave radar technology: a privacy-preserving low-resolution sensor collects human body information in real time, features are extracted by multidimensional parameter fusion, and an artificial intelligence algorithm identifies and judges the extracted features, realizing accurate fall monitoring of the target person. The contribution of each feature dimension to fall identification differs; accordingly, in the feature space of the classification algorithm, each feature parameter is given a weight factor corresponding to its fall-contribution degree when measuring the distance to its nearest neighbors, completing the adaptive optimization of the classification model in the multi-dimensional fusion scenario.
In this embodiment, an apparatus for detecting a fall of an indoor person is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The structure of the fall detection device for indoor personnel provided in this embodiment is shown in fig. 3, and includes:
the acquisition module 301 is configured to acquire temperature distribution data of a target person in a target indoor space, where the temperature distribution data is acquired by an infrared array sensor deployed in the target indoor space;
The extraction module 302 is used for extracting the transverse morphological features and the longitudinal distance features of the target person based on the temperature distribution data to obtain a fusion feature set, wherein the transverse morphological features are used for representing the posture change of the target person, and the longitudinal distance features are used for representing the movement trend of the target person;
the judging module 303 is configured to input the fusion feature set into the classification model, and obtain a falling state judgment result of the target person.
In an alternative embodiment, the apparatus further comprises:
A threshold checking module 304, configured to calculate the variance of the temperature distribution data of consecutive frames, and to extract the transverse morphological features and the longitudinal distance features of the target person when it is detected that the variance of the temperature distribution data of two or more consecutive frames exceeds the variance threshold.
In an alternative embodiment, the extracting module 302 is specifically configured to:
acquiring the number of temperature pixels exceeding a preset room temperature threshold in each frame of temperature distribution data, and dividing it by the total frame count to obtain the average number of effective pixels;
Performing contour analysis on active pixel points in the continuous activated frames to extract morphological characteristics;
Determining the effective action area of the human body action of the target person based on the morphological characteristics;
Based on the average effective pixel point number and the effective action area, obtaining the transverse morphological characteristics of the target personnel;
The continuous activation frame is a frame sequence between a start frame and a stop frame, wherein the start frame is a frame in which the variance of the temperature distribution data of two continuous frames is detected to exceed a variance threshold for the first time, and the stop frame is a frame in which the variance of the temperature distribution data of two continuous frames is detected to be lower than the variance threshold for the first time.
In an alternative embodiment, the extracting module 302 is specifically configured to:
Obtaining the maximum temperature distribution variance according to the maximum value in the variances of the temperature distribution data of each frame;
Acquiring, between the start frame and the end frame of the target person's body movement, the number of pixels for which the variance of each frame's temperature distribution data exceeds the temperature difference threshold, to obtain the maximum number of reacting pixels;
And obtaining the longitudinal distance characteristic of the target person based on the maximum temperature distribution variance and the maximum number of reaction pixels.
In an alternative embodiment, the determining module 303 is specifically configured to:
Different weight factors are distributed for each feature in the fusion feature set, so that fusion features with different weights are obtained;
and inputting the fusion characteristics with different weights into a classification model to obtain a falling state judgment result of the target personnel.
In an alternative embodiment, the classification model includes a K-nearest neighbor algorithm model or a neural network model.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The fall detection device for indoor personnel in this embodiment is presented in the form of functional units, where a unit may be an ASIC (Application Specific Integrated Circuit), a processor and memory that execute one or more software or firmware programs, and/or other devices that can provide the above functions.
The embodiment of the invention also provides computer equipment, which is provided with the indoor personnel falling detection device shown in the figure 3.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 4, the computer device includes one or more processors 10, a memory 20, and interfaces for connecting components, including a high-speed interface and a low-speed interface. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 4.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods shown in implementing the above embodiments.
The memory 20 may include a storage program area that may store an operating system, application programs required for at least one function, and a storage data area that may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 20 may comprise volatile memory, such as random access memory, or nonvolatile memory, such as flash memory, hard disk or solid state disk, or the memory 20 may comprise a combination of the above types of memory.
The computer device further comprises input means 30 and output means 40. The processor 10, memory 20, input device 30, and output device 40 may be connected by a bus or other means, for example in fig. 4.
The input device 30 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer apparatus, such as a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and the like. The output device 40 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light-emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present invention also provide a computer-readable storage medium. The methods according to the embodiments described above may be implemented in hardware or firmware, or as computer code recorded on a storage medium, or as computer code originally stored in a remote storage medium or a non-transitory machine-readable storage medium and downloaded through a network to be stored in a local storage medium, so that the methods described herein may be processed by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random-access memory, a flash memory, a hard disk, a solid state disk, or the like; further, the storage medium may also include a combination of the above types of memory. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Portions of the present invention may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or aspects in accordance with the present invention by way of operation of the computer. Those skilled in the art will appreciate that the existence of computer program instructions in a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and accordingly, the manner in which computer program instructions are executed by a computer includes, but is not limited to, the computer directly executing the instructions, or the computer compiling the instructions and then executing the corresponding compiled programs, or the computer reading and executing the instructions, or the computer reading and installing the instructions and then executing the corresponding installed programs. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for detecting a fall of an indoor person, characterized in that the method comprises: acquiring temperature distribution data of a target person in a target indoor space, the temperature distribution data being collected by an infrared array sensor deployed in the target indoor space; extracting, based on the temperature distribution data, transverse morphological features and longitudinal distance features of the target person to obtain a fusion feature set, the transverse morphological features being used to characterize posture changes of the target person and the longitudinal distance features being used to characterize the movement trend of the target person; and inputting the fusion feature set into a classification model to obtain a fall state determination result for the target person.
2. The method according to claim 1, characterized in that before extracting the transverse morphological features and the longitudinal distance features of the target person, the method further comprises: calculating the variance of the temperature distribution data of consecutive frames; and extracting the transverse morphological features and the longitudinal distance features of the target person when it is detected that the variance of the temperature distribution data of two or more consecutive frames exceeds a variance threshold.
3. The method according to claim 2, characterized in that extracting the transverse morphological features of the target person comprises: obtaining the number of temperature pixels exceeding a preset room temperature threshold in each frame of temperature distribution data and dividing it by the total frame count to obtain the average number of effective pixels; performing contour analysis on active pixels in continuously activated frames to extract morphological features; determining the effective action area of the target person's body action based on the morphological features; and obtaining the transverse morphological features of the target person based on the average number of effective pixels and the effective action area; wherein the continuously activated frames are the frame sequence between a start frame and an end frame, the start frame being the frame in which the variance of the temperature distribution data of two consecutive frames is first detected to exceed the variance threshold, and the end frame being the frame in which that variance is first detected to fall below the variance threshold.
4. The method according to claim 3, characterized in that extracting the longitudinal distance features of the target person comprises: obtaining the maximum temperature distribution variance from the maximum of the per-frame variances of the temperature distribution data; acquiring, between the start frame and the end frame of the target person's body movement, the number of pixels for which the variance of each frame's temperature distribution data exceeds a temperature difference threshold, to obtain the maximum number of reacting pixels; and obtaining the longitudinal distance features of the target person based on the maximum temperature distribution variance and the maximum number of reacting pixels.
5. The method according to claim 4, characterized in that inputting the fusion feature set into the classification model to obtain the fall state determination result for the target person comprises: assigning different weight factors to the features in the fusion feature set to obtain differently weighted fusion features; and inputting the differently weighted fusion features into the classification model to obtain the fall state determination result for the target person.
6. The method according to claim 5, characterized in that the classification model comprises a K-nearest-neighbor algorithm model or a neural network model.
7. The method according to any one of claims 1 to 6, characterized in that the infrared array sensor comprises a top-view sensor group deployed at the top of the target indoor space and a side-view sensor group deployed on a side of the target indoor space.
8. An indoor person fall detection device, characterized in that the device comprises: an acquisition module for acquiring temperature distribution data of a target person in a target indoor space, the temperature distribution data being collected by an infrared array sensor deployed in the target indoor space; an extraction module for extracting, based on the temperature distribution data, transverse morphological features and longitudinal distance features of the target person to obtain a fusion feature set, the transverse morphological features being used to characterize posture changes of the target person and the longitudinal distance features being used to characterize the movement trend of the target person; and a determination module for inputting the fusion feature set into a classification model to obtain a fall state determination result for the target person.
9. A computer device, characterized by comprising a memory and a processor communicatively connected to each other, the memory storing computer instructions, and the processor executing the computer instructions to perform the indoor person fall detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that computer instructions are stored thereon, the computer instructions being used to cause a computer to execute the indoor person fall detection method according to any one of claims 1 to 7.
CN202510624570.6A 2025-05-15 2025-05-15 Indoor person fall detection method, device, equipment and storage medium Active CN120131005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510624570.6A CN120131005B (en) 2025-05-15 2025-05-15 Indoor person fall detection method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN120131005A true CN120131005A (en) 2025-06-13
CN120131005B CN120131005B (en) 2025-09-16

Family

ID=95953001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510624570.6A Active CN120131005B (en) 2025-05-15 2025-05-15 Indoor person fall detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN120131005B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6059899A (en) * 1996-06-28 2000-05-09 Toyota Jidosha Kabushiki Kaisha Press-formed article and method for strengthening the same
JP2007216381A (en) * 2004-07-13 2007-08-30 Matsushita Electric Ind Co Ltd robot
US20150173675A1 (en) * 2013-12-25 2015-06-25 Seiko Epson Corporation Biometric information detecting apparatus
US20190099759A1 (en) * 2016-05-18 2019-04-04 Nippon Sheet Glass Company, Limited Reaction treatment device and method for controlling reaction treatment device
US20190217864A1 (en) * 2016-09-13 2019-07-18 Panasonic Intellectual Property Management Co., Ltd. Road surface condition prediction system, driving assistance system, road surface condition prediction method, and data distribution method
CN112580403A (en) * 2019-09-29 2021-03-30 北京信息科技大学 Time-frequency feature extraction method for fall detection
CN112613388A (en) * 2020-12-18 2021-04-06 燕山大学 Personnel falling detection method based on multi-dimensional feature fusion
CN112784890A (en) * 2021-01-14 2021-05-11 泰康保险集团股份有限公司 Human body falling detection method and system by means of multi-sensor fusion
WO2022041484A1 (en) * 2020-08-26 2022-03-03 歌尔股份有限公司 Human body fall detection method, apparatus and device, and storage medium
US20220221344A1 (en) * 2020-03-06 2022-07-14 Butlr Technologies, Inc Pose detection using thermal data
CN114863471A (en) * 2022-03-25 2022-08-05 南京邮电大学 A multi-feature fusion human fall detection method and system
US20220355453A1 (en) * 2021-05-10 2022-11-10 Max Co., Ltd. Driving tool
CN116184396A (en) * 2023-03-07 2023-05-30 重庆邮电大学 Feature Fusion Human Fall Detection Method Based on Lightweight Network
WO2024103682A1 (en) * 2022-11-14 2024-05-23 天地伟业技术有限公司 Fall behavior identification method based on video classification and electronic device
CN118533303A (en) * 2024-03-20 2024-08-23 国科大杭州高等研究院 A temperature error self-correction system for high-precision temperature indicator blackbody

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6059899A (en) * 1996-06-28 2000-05-09 Toyota Jidosha Kabushiki Kaisha Press-formed article and method for strengthening the same
JP2007216381A (en) * 2004-07-13 2007-08-30 Matsushita Electric Ind Co Ltd robot
US20150173675A1 (en) * 2013-12-25 2015-06-25 Seiko Epson Corporation Biometric information detecting apparatus
US20190099759A1 (en) * 2016-05-18 2019-04-04 Nippon Sheet Glass Company, Limited Reaction treatment device and method for controlling reaction treatment device
US20190217864A1 (en) * 2016-09-13 2019-07-18 Panasonic Intellectual Property Management Co., Ltd. Road surface condition prediction system, driving assistance system, road surface condition prediction method, and data distribution method
CN112580403A (en) * 2019-09-29 2021-03-30 北京信息科技大学 Time-frequency feature extraction method for fall detection
US20220221344A1 (en) * 2020-03-06 2022-07-14 Butlr Technologies, Inc Pose detection using thermal data
WO2022041484A1 (en) * 2020-08-26 2022-03-03 歌尔股份有限公司 Human body fall detection method, apparatus and device, and storage medium
CN112613388A (en) * 2020-12-18 2021-04-06 燕山大学 Personnel falling detection method based on multi-dimensional feature fusion
CN112784890A (en) * 2021-01-14 2021-05-11 泰康保险集团股份有限公司 Human body falling detection method and system by means of multi-sensor fusion
US20220355453A1 (en) * 2021-05-10 2022-11-10 Max Co., Ltd. Driving tool
CN114863471A (en) * 2022-03-25 2022-08-05 南京邮电大学 A multi-feature fusion human fall detection method and system
WO2024103682A1 (en) * 2022-11-14 2024-05-23 Tiandy Technologies Co., Ltd. Fall behavior identification method based on video classification and electronic device
CN116184396A (en) * 2023-03-07 2023-05-30 重庆邮电大学 Feature Fusion Human Fall Detection Method Based on Lightweight Network
CN118533303A (en) * 2024-03-20 2024-08-23 Hangzhou Institute of Advanced Studies of UCAS A temperature error self-correction system for high-precision temperature indicator blackbody

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yu Jing; Peng Xiaodong; Xie Wenming; Qin Runnan; Wang Youliang: "Behavior intention recognition of non-cooperative space targets based on data generation and deep neural networks", Chinese Journal of Space Science, vol. 44, no. 006, 31 December 2024 (2024-12-31) *
Zhang Xiufeng; Yang Li; Wu Mengmeng; Du Yongcheng: "Experimental study on thermal characteristics of submarine wake in a temperature-gradient environment", Journal of Experiments in Fluid Mechanics, no. 02, 15 April 2011 (2011-04-15) *
Li Weitong; Lin Fan; et al.: "Research and development of indoor fall detection technology fusing heterogeneous sensor data", 2022-2023 Guangdong Province Registered Achievements, 30 November 2023 (2023-11-30) *

Also Published As

Publication number Publication date
CN120131005B (en) 2025-09-16

Similar Documents

Publication Publication Date Title
US9122917B2 (en) Recognizing gestures captured by video
Li et al. Fall detection for elderly person care using convolutional neural networks
Mastorakis et al. Fall detection system using Kinect’s infrared sensor
US9208580B2 (en) Hand detection, location, and/or tracking
JP6333844B2 (en) Resource allocation for machine learning
JP6822328B2 (en) Watching support system and its control method
US9674447B2 (en) Apparatus and method for adaptive computer-aided diagnosis
RU2679864C2 (en) Patient monitoring system and method
CN103376890A (en) Gesture remote control system based on vision
CN109791615A (en) For detecting and tracking the method, target object tracking equipment and computer program product of target object
Planinc et al. Robust fall detection by combining 3D data and fuzzy logic
CN101930540A (en) Video-based multi-feature fusion flame detecting device and method
JP2011209794A (en) Object recognition system, monitoring system using the same, and watching system
Kong et al. A skeleton analysis based fall detection method using tof camera
KR20120026956A (en) Method and apparatus for motion recognition
Bhattacharya et al. Arrays of single pixel time-of-flight sensors for privacy preserving tracking and coarse pose estimation
Nguyen et al. Extracting silhouette-based characteristics for human gait analysis using one camera
US8913129B2 (en) Method and system of progressive analysis for assessment of occluded data and redundant analysis for confidence efficacy of non-occluded data
CN120131005B (en) Indoor person fall detection method, device, equipment and storage medium
JP2011198244A (en) Object recognition system, monitoring system using the same, and watching system
Appiah et al. Human behavioural analysis with self-organizing map for ambient assisted living
CN118379800B (en) Human body falling detection method, device, equipment and storage medium under shielding condition
Zaghden et al. Integrating Attention Mechanisms in YOLOv8 for Improved Fall Detection Performance.
CN119649500A (en) Intelligent door lock unlocking method, device, computer equipment and storage medium
CN118522086A (en) Identification method, device, equipment and medium of intelligent door lock and intelligent door lock

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant