Disclosure of Invention
The invention aims to provide a health state assessment method and system, a terminal device and a storage medium, so as to solve the prior-art problem that a user cannot accurately grasp his or her overall health condition.
In order to achieve the purpose, the invention adopts the following technical scheme:
a health state assessment method, comprising:
taking a plurality of results among a blood pressure detection result, a heart rate detection result, an emotion detection result, a vitality detection result, a physical strength detection result, an age detection result, a skin moisture detection result and a skin oil-dryness detection result as physiological health evaluation indexes, assigning weights to the physiological health evaluation indexes, and comprehensively evaluating a physiological health state index L according to the weights;
wherein the weight assigned to the blood pressure detection result and the heart rate detection result is greater than the weight assigned to other physiological health evaluation indexes.
Optionally, the method for comprehensively evaluating the physiological health state index L includes:
when the heart rate detection result and the blood pressure detection result are both in a normal range or the heart rate detection result and the blood pressure detection result are absent, all the physiological health evaluation indexes are used as input parameters, a first physiological health index L1 is obtained through a physiological health state evaluation model, and the physiological health state index L is obtained according to the first physiological health index L1;
and/or when the heart rate detection result and the blood pressure detection result both exceed a normal range, obtaining a second physiological health index L2 according to the heart rate detection result, and obtaining a third physiological health index L3 according to the blood pressure detection result; obtaining a first physiological health index L1 through a physiological health state evaluation model by taking other physiological health evaluation indexes except the heart rate detection result and the blood pressure detection result as input parameters; obtaining the physiological health state index L according to the first physiological health index L1, the second physiological health index L2 and the third physiological health index L3;
and/or when the heart rate detection result exceeds the normal range and the blood pressure detection result is in the normal range or is absent, obtaining a second physiological health index L2 according to the heart rate detection result; obtaining a first physiological health index L1 through a physiological health state evaluation model by taking the physiological health evaluation indexes other than the heart rate detection result as input parameters; obtaining the physiological health state index L according to the first physiological health index L1 and the second physiological health index L2;
and/or when the blood pressure detection result exceeds the normal range and the heart rate detection result is in the normal range or is absent, obtaining a third physiological health index L3 according to the blood pressure detection result; obtaining a first physiological health index L1 through a physiological health state evaluation model by taking the physiological health evaluation indexes other than the blood pressure detection result as input parameters; obtaining the physiological health state index L according to the first physiological health index L1 and the third physiological health index L3.
Optionally, the health status evaluation method further includes:
taking a plurality of results among an emotion detection result, a vitality detection result, a physical strength detection result, an age detection result, a skin moisture detection result, a skin oil-dryness detection result, a character detection result and a mood detection result as mental health evaluation indexes, and obtaining a first mental health index M1 through a mental health state evaluation model;
obtaining a mental health state index M according to the first mental health index M1 and the physiological health state index L.
Optionally, the method for obtaining the blood pressure detection result and the heart rate detection result includes:
acquiring a face video and a fingertip video of a user;
sequentially carrying out face identification and tracking, skin pixel detection and face region segmentation on the face video to obtain face information, and analyzing the face information to obtain a face pulse wave signal;
acquiring a fingertip pulse wave signal through the fingertip video;
obtaining the heart rate detection result through a heart rate algorithm according to the facial pulse wave signal and the fingertip pulse wave signal;
extracting a multi-dimensional feature set according to the facial pulse wave signal and the fingertip pulse wave signal, wherein the multi-dimensional feature set comprises a pulse wave conduction feature set, a pulse wave morphological feature set and a pulse wave semantic feature set;
and performing optimal feature subset selection on the multi-dimensional feature set to construct a blood pressure estimation model and establish a regression model, and calculating to obtain the blood pressure detection result.
Optionally, the method for obtaining the age detection result includes: obtaining a facial image of the user, identifying the face region, cropping the face to obtain complete face data, and inputting the face data into an age detection model to obtain the age detection result.
Optionally, the method for obtaining the physical strength detection result includes:
acquiring a face image of a user;
calling a human face feature point extraction model, and extracting 68 feature points of a human face from face data of the face image;
extracting multi-dimensional face features based on the 68 feature points, wherein the multi-dimensional face features at least comprise face features, distribution features of five sense organs and shape features of the five sense organs;
performing optimal feature subset selection on the extracted multi-dimensional face features to obtain an optimal feature subset;
training a random forest model based on the optimal feature subset, visualizing the results of different hyper-parameter combinations for manual tuning, and comparing different hyper-parameter combinations to obtain a final random forest model;
and performing physical strength detection according to the final random forest model and the age detection result to obtain the physical strength detection result.
Optionally, the method for obtaining the skin dryness detection result and the skin moisture detection result includes:
acquiring a face image of a user, and acquiring a skin image of a face region from the face image;
converting the skin image into an index image, and extracting oil-light characteristics from the index image;
inputting the oil-light characteristics into a skin oil dryness detection model, and combining the age detection result to obtain the skin oil dryness detection result;
and extracting moisture-related features from the index image in combination with the skin oil-dryness detection result, and inputting the moisture-related features into a skin moisture detection model to obtain the skin moisture detection result.
A server, comprising: a memory for storing a computer program and a processor; the processor is adapted to execute the health status assessment method as defined in any of the above when the computer program is invoked.
A health state evaluation system comprises a server and terminal equipment;
the terminal device is used for collecting a face image, a face video and a fingertip video of the user, sending the face image, the face video and the fingertip video to the server, and receiving and displaying the detection results and health state evaluation results returned by the server;
the server is used for detecting various health parameters according to the face image, the face video and the fingertip video to obtain the detection results, performing health evaluation according to the health state evaluation method, and returning the detection results and health evaluation results to the terminal device.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a state of health assessment method as set forth in any one of the preceding claims.
Compared with the prior art, the invention has the beneficial effects that:
according to the embodiment of the invention, the physiological health evaluation indexes are obtained by carrying out comprehensive analysis based on diversified detection results, and different weights are distributed to different evaluation indexes during analysis, so that the health state of the user can be comprehensively and accurately reflected on the whole.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the invention provides a health status evaluation method, including:
Step 101, obtaining a plurality of results among a blood pressure detection result, a heart rate detection result, an emotion detection result, a vitality detection result, a physical strength detection result, an age detection result, a skin moisture detection result and a skin oil-dryness detection result.
And 102, taking the multiple detection results obtained in the step 101 as physiological health evaluation indexes, and distributing weights to the physiological health evaluation indexes.
Compared with the other physiological health evaluation indexes, the heart rate detection result and the blood pressure detection result reflect the degree of physiological health more directly and accurately. Therefore, when assigning the weights, the weight assigned to the blood pressure detection result and the heart rate detection result should be larger than the weights assigned to the other physiological health evaluation indexes.
And 103, comprehensively evaluating each physiological health evaluation index based on the distributed weight to obtain a physiological health state index L.
In the method, the physiological health evaluation indexes are obtained by comprehensively analyzing based on diversified detection results, and different weights are distributed to different evaluation indexes during analysis, so that the health state of the user can be comprehensively and accurately reflected on the whole.
In an alternative embodiment, the method for comprehensively evaluating the physiological health status index L in step 103 may include:
First, under the condition that the heart rate detection result and the blood pressure detection result are both in the normal range, or both are absent:
obtaining a first physiological health index L1 through a physiological health state evaluation model by taking all physiological health evaluation indexes as input parameters;
the physiological health state index L is obtained from the first physiological health index L1.
Secondly, under the condition that the heart rate detection result and the blood pressure detection result both exceed the normal range:
obtaining a second physiological health index L2 according to the heart rate detection result, and obtaining a third physiological health index L3 according to the blood pressure detection result;
obtaining a first physiological health index L1 through a physiological health state evaluation model by taking other physiological health evaluation indexes except the heart rate detection result and the blood pressure detection result as input parameters;
the physiological health state index L is obtained according to the first physiological health index L1, the second physiological health index L2 and the third physiological health index L3.
Thirdly, under the condition that the heart rate detection result exceeds the normal range and the blood pressure detection result is in the normal range or is absent:
obtaining a second physiological health index L2 according to the heart rate detection result;
obtaining a first physiological health index L1 through a physiological health state evaluation model by taking other physiological health evaluation indexes except the heart rate detection result as input parameters;
obtaining a physiological health state index L according to the first physiological health index L1 and the second physiological health index L2.
Fourthly, under the condition that the blood pressure detection result exceeds the normal range and the heart rate detection result is in the normal range or is absent:
obtaining a third physiological health index L3 according to the blood pressure detection result;
obtaining a first physiological health index L1 through a physiological health state evaluation model by taking other physiological health evaluation indexes except the blood pressure detection result as input parameters;
obtaining a physiological health state index L according to the first physiological health index L1 and the third physiological health index L3.
It should be noted that the normal range in the present embodiment can be defined in a conventional manner; for example, the normal heart-rate range can be defined as 60-100 beats/min. On this basis, when the heart rate detection result exceeds 100 beats/min, the heart rate is determined to be too fast (tachycardia) and therefore out of the normal range; when the heart rate detection result is below 60 beats/min, the heart rate is determined to be too slow (bradycardia) and likewise out of the normal range.
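The normal-range determination above can be sketched as a small helper; the function name and the returned labels are illustrative, and the 60-100 beats/min boundaries follow the example given in the text:

```python
def classify_heart_rate(bpm):
    """Classify a heart-rate reading against the conventional 60-100 beats/min range.

    A missing reading (None) corresponds to the "result is absent" branches
    described in the evaluation cases.
    """
    if bpm is None:
        return "missing"
    if bpm > 100:
        return "too fast"   # tachycardia: out of the normal range
    if bpm < 60:
        return "too slow"   # bradycardia: out of the normal range
    return "normal"
```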
Fig. 2 shows an example in which L = L1 in the first case, L = L1 + L2 + L3 in the second case, L = L1 + L2 in the third case, and L = L1 + L3 in the fourth case. When the input of the physiological health state evaluation model includes the blood pressure detection result and the heart rate detection result, the value set of L1 may be {-0.5, 0, 0.5, 1, 2}; when the input lacks the blood pressure detection result or the heart rate detection result, the value set of L1 may be reduced to {-0.5, 0, 0.5}.
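Assuming the sub-indices are simply summed as in the example of Fig. 2, the four cases collapse into one combination rule, since L2 and L3 exist only when the corresponding reading is out of range. A minimal sketch (function name is illustrative):

```python
def combine_physiological_index(L1, L2=None, L3=None):
    """Combine sub-indices into the overall physiological health state index L.

    L2 (heart rate) and L3 (blood pressure) are passed only when the
    corresponding detection result is out of its normal range, matching
    the four cases of the embodiment; otherwise L = L1.
    """
    L = L1
    if L2 is not None:
        L += L2
    if L3 is not None:
        L += L3
    return L
```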
Generally, a positive value of the physiological health state index L indicates a higher physiological health degree, and a negative value of the physiological health state index L indicates a lower physiological health degree. Furthermore, based on the repeated test results, a proper critical value of health and sub-health can be selected to provide comparison reference for the user, so that the user can know the self health degree more clearly.
Referring to fig. 3 and 4, another health status assessment method is provided according to an embodiment of the present invention, which adds the following steps to the method shown in fig. 1:
and 104, acquiring a plurality of items of emotion detection results, vitality detection results, physical strength detection results, age detection results, skin moisture detection results, skin dryness detection results, character detection results and mood detection results.
And 105, taking the multiple detection results obtained in the step 104 as mental health evaluation indexes, and obtaining a first mental health index M1 through a mental health state evaluation model.
For example, the value set of the first mental health index M1 may be defined as {-0.5, 0, 0.5, 1, 2}.
And step 106, obtaining a mental health state index M according to the first mental health index M1 and the physiological health state index L.
In this step, different weights may be assigned according to the degree of influence of the first mental health index M1 and the physiological health state index L on the mental health state; an exemplary formula is M = M1 + 0.5 × L. In practice, the weight of the physiological health state index L may take other suitable values, such as 0.55 or 0.6, selected from the statistics of experimental samples according to the emphasis of the actual product function plan.
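Reading the text's example as M = M1 + 0.5 × L, the weighted combination can be sketched as follows; the default weight mirrors that example, and other values such as 0.55 or 0.6 may be substituted as the embodiment notes:

```python
def mental_health_index(M1, L, weight_L=0.5):
    """Weighted combination of the first mental health index M1 and the
    physiological health state index L into the mental health state index M.

    weight_L defaults to 0.5 per the text's example; it may be tuned from
    experimental-sample statistics (e.g. 0.55 or 0.6).
    """
    return M1 + weight_L * L
```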
Similarly, generally, a positive value of the mental health state index M indicates a higher degree of mental health, and a negative value of the mental health state index M indicates a lower degree of mental health.
On the basis of realizing the physiological health state assessment, the mental health state assessment is also realized, so that the user can know the health condition of the user in multiple directions, and the assessment result is more comprehensive.
Hereinafter, the detection methods of the health parameters will be described in detail.
(1) Method for detecting emotion, character, vitality and mood
In this embodiment, deep learning is used to handle the estimation of emotion, character, vitality and mood from a static face image. The deep learning model used is a convolutional neural network (CNN); this module adopts the Xception algorithm, a class of improvements on the Inception v3 architecture that mainly replaces the convolution operations of the original Inception v3 with depthwise separable convolutions. The base model is trained with the FER2013 facial expression data set. After this pre-training yields a base framework, face images with different emotions, characters, vitality levels and moods collected in the laboratory are fed into the pre-trained base model for fine-tuning, producing the final detection models for the four indexes of emotion, character, vitality and mood.
Fig. 5 shows a specific operation logic, after the face data is obtained, the face data is input into the corresponding detection models, and the emotion detection result, the mood detection result, the character detection result, and the vitality detection result are obtained through the detection models respectively.
It should be noted that after the character detection is performed by using the character detection model, the final vitality detection result is calculated by combining the age detection result and the physical strength detection result. Therefore, the accuracy of the vitality detection result can be effectively improved.
(2) Age detection method
In this embodiment, deep learning is used to handle age detection from a static face image. The deep learning model used is a convolutional neural network (CNN); this module adopts the Inception-ResNet-v2 architecture, pre-trained on ImageNet for image classification. Face images of different ages and genders crawled from Wikipedia are then fed into the pre-trained base model for fine-tuning to obtain the final age detection model, which is trained with the APPA-REAL and UTKFace data sets.
Fig. 6 shows a specific operation flow of the age detection method, which includes the following steps:
Step 201, acquiring a face image of the user: the face image can be read from a given picture path through the CV2 module. Since imread in the CV2 module uses the BGR format, the face image needs to be converted from BGR to RGB for subsequent processing.
Step 202, obtaining the face region: face detection is implemented with the Dlib library, which locates the face region based on the HOG (histogram of oriented gradients) feature.
Step 203, face cropping: since face detection may not frame the entire face, this step enlarges the detected box by a margin when cropping, which helps capture the whole face.
And step 204, inputting the face data into an age detection model to obtain an age detection result.
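The margin-expansion crop of step 203 can be sketched in pure Python; the 40% margin is an illustrative assumption, as the embodiment does not fix a value:

```python
def crop_face_with_margin(img_h, img_w, box, margin=0.4):
    """Expand a detected face box by `margin` of its width/height on each
    side and clamp the result to the image bounds, so that the crop covers
    the whole face (step 203).

    box: (left, top, right, bottom), as a face detector would return.
    Returns the expanded (left, top, right, bottom).
    """
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    left = max(0, int(left - margin * w))
    top = max(0, int(top - margin * h))
    right = min(img_w, int(right + margin * w))
    bottom = min(img_h, int(bottom + margin * h))
    return left, top, right, bottom
```

The expanded region would then be cropped from the image and resized to the input size expected by the age detection model.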
(3) Physical strength detection method
This embodiment uses a random forest to handle physical-strength estimation from a static face image. A random forest is an ensemble learning method that can perform classification, regression and other tasks: it constructs many decision trees and combines their classification results to obtain the final result. Random forests correct the tendency of decision trees to overfit their training set, and generally outperform a single decision tree. In application, the model can be trained with data collected in northern Tanzania and the CASIA-FaceV5 Asian face data set.
Fig. 7 shows a specific operation method of the physical strength detection method, comprising the following steps:
step 301, acquiring a face image of a user;
step 302, calling a human face feature point extraction model by using a detector of a Dlib library, and extracting 68 feature points of a human face from face data of the face image.
And step 303, extracting and calculating multi-dimensional face features based on other face information and 68 feature points, wherein the multi-dimensional face features at least comprise face features, distribution features of five sense organs and shape features of five sense organs.
And 304, selecting an optimal feature subset for the extracted multi-dimensional face features to obtain the optimal feature subset.
Step 305, training a random forest model based on the optimal feature subset, visualizing the results under different hyper-parameter combinations for manual tuning, and comparing different hyper-parameter combinations to obtain the optimal random forest model.
And step 306, performing physical strength detection according to the final random forest model and the age detection result to obtain a physical strength detection result.
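Steps 304-306 can be sketched with scikit-learn as follows. This is a minimal illustration only: the synthetic features and labels, the hyper-parameter grid, and the way the result is used are all assumptions, since the embodiment does not specify them:

```python
# Sketch of training and tuning a random forest for physical-strength
# classification (steps 305-306). Data and hyper-parameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # stand-in for the multi-dimensional face features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in physical-strength label

# Step 305: compare different hyper-parameter combinations and keep the best model.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [4, None]},
    cv=3,
)
grid.fit(X, y)
model = grid.best_estimator_

# Step 306: in the embodiment, the age detection result is also combined
# (e.g. it could be appended as an extra input feature).
pred = model.predict(X[:5])
```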
(4) Skin condition detection method
The skin quality detection method is subdivided into skin oil dryness detection and skin moisture detection, as shown in fig. 8, wherein the skin oil dryness detection method includes:
step 401, obtaining a face image of a user, and obtaining a skin image of a face region from the face image.
Step 402, converting the skin image into a grayscale image, and normalizing it into an index image.
And 403, extracting oil light characteristics from the index image.
The grayscale image is divided equally into 10 levels according to brightness value (range 0-255); the grayscale image is binarized, the brightness of the current environment is obtained with an auxiliary algorithm, and pixels whose index value contrasts strongly with the current environment brightness are identified as oil-light pixels.
And step 404, inputting the extracted oil-light characteristics into a skin oil dryness detection model, and combining an age detection result to obtain a skin oil dryness detection result.
In this step, the area of each oil light spot and the number of oil light spots can be counted by using a maximum connected region algorithm.
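Steps 402-403 can be sketched as follows; the contrast threshold is an illustrative assumption, and the resulting mask could then be fed to a connected-component routine (e.g. `scipy.ndimage.label`) to count oil-light spots and their areas as step 404 describes:

```python
import numpy as np

def oil_light_mask(gray, env_brightness, contrast=60):
    """Quantize a grayscale skin image into 10 brightness levels and mark
    pixels much brighter than the ambient brightness as oil-light candidates.

    gray:            2-D uint8 grayscale skin image (values 0-255).
    env_brightness:  estimated brightness of the current environment.
    contrast:        illustrative threshold; the embodiment fixes no value.
    Returns (levels, mask): the 10-level index image and the boolean mask.
    """
    g = gray.astype(int)
    levels = np.minimum(g * 10 // 256, 9)        # 10 equal levels over 0-255
    mask = (g - env_brightness) > contrast       # binarization against ambient light
    return levels, mask
```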
A method of skin moisture detection comprising:
and 405, extracting corresponding features according to the obtained skin dryness detection result and the characteristics of the face on the moisture.
And step 406, further utilizing the extracted features to obtain a skin moisture detection result through a skin moisture detection model. Wherein the skin moisture detection model can be trained by using the CASIA-faceV5 Asian face data set.
(5) Heart rate and blood pressure detection method
Human pulse waves are extracted based on iPPG (imaging photoplethysmography) technology, from which the heart rate can be calculated. Combined with hemodynamic theory, indexes such as PTT and PAT can be computed and multi-dimensional features extracted (pulse wave transit time, pulse wave velocity, K value, and the like), so that dynamic changes in blood pressure can be calculated.
Fig. 9 shows specific operating logic of the heart rate and blood pressure detection method, including the following steps:
and step 501, acquiring a face video and a fingertip video of a user.
Step 502, sequentially performing face recognition and tracking, skin pixel detection and face region segmentation on the face video to obtain face information, and analyzing the face information to obtain a face pulse wave signal.
Step 503, obtaining a fingertip pulse wave signal through the fingertip video.
And step 504, obtaining a heart rate detection result through a heart rate algorithm according to the facial pulse wave signal and the fingertip pulse wave signal.
And 505, extracting a multi-dimensional feature set according to the facial pulse wave signal and the fingertip pulse wave signal, wherein the multi-dimensional feature set comprises a pulse wave conduction feature set, a pulse wave morphological feature set and a pulse wave semantic feature set.
And 506, performing optimal feature subset selection on the multi-dimensional feature set to construct a blood pressure estimation model and establish a regression model, and calculating to obtain a blood pressure detection result.
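As one illustrative possibility for step 504 (the embodiment does not specify its heart-rate algorithm), the heart rate can be estimated from a pulse-wave signal by locating the dominant spectral peak within the plausible human heart-rate band:

```python
import numpy as np

def heart_rate_from_pulse(signal, fs):
    """Estimate heart rate (beats/min) from a pulse-wave signal by spectral
    peak picking in the 0.7-3.5 Hz band (42-210 beats/min).

    A minimal sketch only; the band limits are a common assumption, not
    taken from the embodiment.
    """
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                 # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 3.5)          # plausible heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```

In practice the facial and fingertip pulse-wave signals would first be band-pass filtered and detrended before such an estimate, and the two estimates could be fused.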
It should be noted that the various detection models involved in the embodiment of the present invention (e.g., the emotion detection model, mood detection model, character detection model, vitality detection model, skin oil-dryness detection model, skin moisture detection model, random forest model, etc.) may be implemented with known prior art, and are not described again here.
Based on the same inventive concept, the embodiment of the present invention further provides a server, including: a memory for storing a computer program and a processor; the processor is adapted to execute the health status assessment method as described above when the computer program is invoked.
Referring to the system architecture shown in fig. 10 and the data processing logic diagram shown in fig. 11, an embodiment of the present invention further provides a health status evaluation system, which includes a server and a terminal device;
and the terminal equipment is used for acquiring the face image, the face video and the finger end video of the user and sending the face image, the face video and the finger end video to the server, and is also used for receiving and displaying various detection results and health state evaluation results returned by the server. Further, this terminal equipment includes preceding camera, back camera, flash light and WIFI transmission module, utilizes preceding camera to gather facial image data and facial video data carrying out image acquisition constantly, utilizes back camera to gather and indicates end video data, and usable flash light carries out the light filling in order to ensure the definition of the image of gathering at image acquisition in-process.
The server is used for detecting various health parameters according to the face image, the face video and the fingertip video to obtain the detection results, performing health evaluation according to the health state evaluation method described above, and returning the detection results and health evaluation results to the terminal device.
In this system, the terminal device is used only to collect the image data and to display the evaluation results and health parameters; the data analysis is performed by the server based on big-data algorithms. An accurate health state evaluation result can thus be obtained without being limited by the memory capacity of the terminal device, and the system is simple to operate and highly secure.
The terminal equipment can be any equipment which can collect user image data and has a display function, such as a smart phone, a tablet personal computer and a smart watch. The present embodiment does not particularly limit the specific type and number of the terminal devices.
It will be understood by those skilled in the art that all or part of the steps in the health state assessment method described above may be performed by instructions, or by instructions controlling associated hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present invention further provide a storage medium in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in the health state assessment method provided by the embodiments of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.