Disclosure of Invention
The invention mainly aims to provide a fatigue driving monitoring method and a cloud server, with the object of fundamentally eliminating the potential safety hazards caused by driver fatigue.
To achieve this object, the fatigue driving monitoring method provided by the invention comprises the following steps:
receiving a fatigue state judgment request sent by an intelligent terminal, wherein the fatigue state judgment request comprises current sign information of a driver;
analyzing and processing the current sign information to determine the driving state of the driver;
and when determining that the driver is in the fatigue driving state at present, generating a control instruction according to the fatigue driving state, and sending the control instruction to the intelligent terminal.
Preferably, when it is determined that the driver is currently in the fatigue driving state, generating a control instruction according to the fatigue driving state and sending the control instruction to the intelligent terminal comprises: comparing the fatigue driving state of the driver with a preset driving fatigue level, and sending a control instruction corresponding to the preset driving fatigue level.
As a first preferred embodiment, the current sign information is a video image containing the driver's head, face, or hands, and analyzing the current sign information comprises:
detecting the video image, and positioning a characteristic image in the video image;
analyzing the characteristic image and determining characteristic information of the characteristic image;
comparing the characteristic information of the characteristic image with a preset statistical model to determine the driving state of the driver;
and collecting a preset number of characteristic images as sample data, and analyzing the sample data according to a preset algorithm to obtain the preset statistical model.
As a second preferred embodiment, the current sign information is a pulse signal containing the heart rate, respiration, or blood pressure of the driver; analyzing and processing the current sign information to determine the driving state of the driver comprises:
processing the pulse signal and converting the pulse signal into a digital signal;
judging whether the duration for which the value of the digital signal exceeds a threshold value is greater than a first preset duration;
and if the duration exceeding the threshold is longer than the first preset duration, judging that the driving state of the driver is a fatigue driving state.
The invention also provides a fatigue driving monitoring method, which comprises the following steps:
collecting information containing current signs of a driver;
sending a fatigue state judgment request containing the current sign information to a cloud server;
the cloud server receives the fatigue state judgment request, wherein the fatigue state judgment request comprises current sign information of a driver;
analyzing and processing the current sign information to determine the driving state of the driver;
when determining that the driver is in a fatigue driving state at present, generating a control instruction according to the fatigue driving state;
and the intelligent terminal receives the control instruction and sends the control instruction to the vehicle controller.
The invention also provides a cloud server, comprising:
the remote receiving port is used for receiving a fatigue state judgment request sent by the intelligent terminal, the request containing the driver's current sign information;
the judgment module is used for analyzing and processing the current sign information and determining the driving state of the driver;
and the instruction module is used for generating a control instruction according to the fatigue driving state and sending the control instruction to the intelligent terminal when determining that the driver is in the fatigue driving state at present.
Preferably, the instruction module is further configured to: when it is determined that the driver is currently in the fatigue driving state, compare the fatigue driving state of the driver with a preset driving fatigue level and send a control instruction corresponding to the preset driving fatigue level.
As a first preferred embodiment, the current sign information is a video image containing the driver's head, face, or hands, and accordingly the judgment module further includes:
the positioning sub-module is used for detecting the video image and positioning the characteristic image in the video image;
the analysis submodule is used for analyzing the characteristic image and determining the characteristic information of the characteristic image;
and the determining submodule is used for comparing the characteristic information with the preset statistical model and determining the driving state of the driver.
As a second preferred embodiment, the current sign information is a pulse signal containing the heart rate, respiration, or blood pressure of the driver, and accordingly the judgment module includes:
the conversion submodule is used for processing the pulse signal and converting the pulse signal into a digital signal;
the comparison submodule is used for judging whether the duration of the value of the digital signal exceeding the threshold value is greater than a first preset duration or not;
and the judging submodule is used for judging that the driving state of the driver is a fatigue driving state when the duration of the value of the digital signal exceeding the threshold value is greater than a first preset duration.
According to the technical scheme, the intelligent vehicle-mounted unit collects information containing the driver's current signs and sends it to the cloud server. On receiving the information, the cloud server judges whether the driver is currently in a fatigue state, generates a control instruction corresponding to the level of the fatigue state, and sends it to the intelligent vehicle-mounted unit; the intelligent terminal then forwards the control instruction to the vehicle controller, which executes it. The driver's fatigue state can thus be monitored, and once fatigue driving occurs the vehicle controller is made to execute a control command, such as decelerating or stopping the vehicle, thereby fundamentally eliminating the potential safety hazard brought by fatigue driving.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear) in the embodiments of the present invention are only used to explain the relative positional relationship and movement of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "connected," "secured," and the like are to be construed broadly; for example, "secured" may be a fixed connection, a removable connection, or an integral part; the connection may be mechanical or electrical; it may be direct, indirect through an intervening medium, internal to two elements, or any other suitable relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, provided that the combination can be realized by a person skilled in the art; when the technical solutions are contradictory or cannot be realized, such a combination should be considered not to exist, and it falls outside the protection scope of the present invention.
Fig. 1 and 2 show a fatigue driving monitoring system according to an embodiment of the present invention. The architecture may involve:
the information acquisition element is used for continuously generating information data containing the driver's signs. It may be a camera or a wearable sensor such as a heart rate sensor, a blood pressure sensor, or a respiration sensor. The camera can be mounted on the vehicle, directly facing the driver, so as to capture the driver's posture; a sensor can be embedded in the seat belt. The acquisition element communicates wirelessly with the intelligent terminal through a network or Bluetooth;
the intelligent terminal can be an intelligent vehicle-mounted unit or a mobile terminal. The vehicle-mounted unit can be installed on the vehicle, communicates with the media equipment and the controller on the vehicle through a CAN bus, and communicates with the cloud server through a wireless network; it can be a vehicle-mounted computer system applying wireless communication technology, such as a T-BOX (telematics box) or an OBD (on-board diagnostic) system. The mobile terminal can be a mobile phone, a tablet computer, a smart bracelet, a smart watch, or the like, and communicates wirelessly with the media equipment and controller on the vehicle and with the cloud server through a wireless network;
and the cloud server communicates with the intelligent terminal through a remote interface and the network, and is configured with a memory and a processor;
a memory for storing executable instructions of the processor; the processor is used for acquiring video images or pulse data containing the signs of the driver; detecting the video image, and positioning a characteristic image in the video image; analyzing the characteristic image to determine the characteristic information of the characteristic image; determining the driving state of the driver according to the characteristic information; and processing and converting the pulse data, and determining the driving state of the driver according to the converted data.
The fatigue driving monitoring method according to the embodiment of the present invention is explained in detail based on the above system framework.
Referring to fig. 3, a fatigue driving monitoring method according to an embodiment of the present invention includes:
step S10, receiving a fatigue state judgment request sent by an intelligent terminal, wherein the fatigue state judgment request comprises current sign information of a driver;
the cloud server receives a fatigue state request by means of a remote receiving interface, wherein the fatigue state request comprises information data of current physical signs of a driver; and storing the request and the information data in a memory. The information of the current physical sign of the driver can be specifically a video image containing the body posture characteristics of the driver, and the video image can specifically contain the facial characteristics, the head characteristics or the hand characteristics of the driver; but also pulse signals including the vital signs of the driver, such as heart rate, blood pressure, breathing frequency, etc.
Step S20, analyzing and processing the current sign information, and determining the driving state of the driver;
The stored data is retrieved and analyzed accordingly, and the driving state of the driver is finally determined, i.e., whether the driver is in a fatigue driving state or a non-fatigue driving state.
And step S30, when determining that the driver is in the fatigue driving state at present, generating a control instruction according to the fatigue driving state, and sending the control instruction to the intelligent terminal.
Once the driver is judged to be in the fatigue driving state, generating control instructions of different levels according to the level of the fatigue driving state, wherein the control instructions can be general deceleration driving signals or emergency braking signals; the cloud server can send the control instruction to the intelligent terminal through the wireless network.
According to the technical scheme, the intelligent vehicle-mounted unit collects information containing the driver's current signs and sends it to the cloud server. On receiving the information, the cloud server judges whether the driver is currently in a fatigue state, generates a control instruction corresponding to the level of the fatigue state, and sends it to the intelligent vehicle-mounted unit; the intelligent terminal then forwards the control instruction to the vehicle controller, which executes it. The driver's fatigue state can thus be monitored, and once fatigue driving occurs the vehicle controller is made to execute a control command, such as decelerating or stopping the vehicle, thereby fundamentally eliminating the potential safety hazard brought by fatigue driving.
Further, different kinds of current sign information correspond to different manners of generating the control instruction, which are described in detail below:
in one mode, the fatigue driving monitoring method includes:
step 101, receiving a fatigue state judgment request of an intelligent terminal, wherein the request comprises information of current physical signs of a driver; the current sign information is a video image containing the head, face or hand of the driver;
the camera records a video image containing the physical signs of the driver, the video image at least comprises a facial image of the driver, so that whether the driver in the video is in a fatigue state, such as a dozing state, a uncomfortable body state and the like is judged through analyzing the facial image of the driver. The automobile steering system can further comprise hand images of a driver, if the hands of the driver are placed on the steering wheel, the driver can be judged to be in a fatigue state if the hands are separated from the steering wheel, and therefore speed reduction driving or emergency braking is sent to the controller, and driving safety hazards caused by fatigue of the driver are prevented.
Step 201, analyzing and processing the current sign information to determine the driving state of the driver;
In order to automatically identify the driver appearing in the video, frame images need to be extracted from the video, the frame images containing face images need to be further selected, and face image recognition is performed on these frame images using a preset algorithm so as to identify the driver information in the video. Specifically, the embodiment determines the driver information in the video based on a pre-obtained recognition model together with face detection and tracking technology;
step 201a, detecting the video image, and positioning a characteristic image in the video image;
If the method is applied to a cloud server, the video shot by the camera can be sent to the cloud server through a wireless network, and the received video image is detected and analyzed by the positioning sub-module of the cloud server. If the method is applied to the driver's terminal (client device), a client application of the fatigue driving detection method can be installed in the camera, or a driver terminal device such as a mobile phone can be connected with the camera in a wired or wireless manner, with the subsequent analysis performed by application software in the phone. The analysis process first detects the video image, which is composed of successive frame images: the frame images are scanned, a characteristic image appearing in a frame image is located, and the position coordinates of the characteristic image in the frame image are marked so as to determine its position information.
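The frame-sampling and feature-localization step described above can be sketched as follows. The detector is passed in as a plain function standing in for a real face or eye detector, and the step length of 5 frames is only an example (the patent does not fix either):

```python
def locate_features(frames, detect, step=5):
    """Scan every `step`-th frame and record where the feature appears.

    `detect` is any detector returning a bounding box (x, y, w, h) or
    None when no feature is found. Sampling with a step length reduces
    the amount of video data that must be analysed.
    """
    located = []
    for idx in range(0, len(frames), step):
        box = detect(frames[idx])
        if box is not None:
            located.append((idx, box))  # frame index + position coordinates
    return located

# Toy detector: "finds" the feature only in even-numbered frames.
boxes = locate_features(list(range(20)),
                        lambda f: (0, 0, 10, 10) if f % 2 == 0 else None,
                        step=5)
```

In practice `detect` would be something like an OpenCV cascade classifier, and `frames` a decoded video stream.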
Step 201b, analyzing the characteristic image, and determining characteristic information of the characteristic image;
the characteristic image may be classified into various types depending on the judgment criterion of the driving state of the driver, for example, a head image, an eye image, a mouth image, and a hand image including a steering wheel; according to the resolution of the camera, the eye image can be further divided into an iris image, a pupil image and the like. And the analysis submodule analyzes the characteristic images according to the respective attribute characteristics of the characteristic images and determines characteristic information contained in the characteristic images. For example, if the feature image is an eye image, the feature information may include: the opening degree value between the upper eyelid and the lower eyelid, pupil opening degree characteristic parameters, eyeball contour size values and the like.
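A minimal sketch of one such characteristic parameter, the opening degree between the upper and lower eyelids, normalised by eye width so the value is comparable across camera distances and resolutions. The coordinate inputs are assumed to come from an upstream landmark detector:

```python
def eye_opening_degree(upper_lid_y, lower_lid_y, eye_width):
    """Normalised opening degree between upper and lower eyelid.

    Dividing the eyelid gap by the eye width makes the parameter
    roughly invariant to how far the driver sits from the camera.
    """
    if eye_width <= 0:
        raise ValueError("eye_width must be positive")
    return abs(lower_lid_y - upper_lid_y) / eye_width
```

A fully open eye yields a larger value, a drooping eyelid a smaller one; the downstream comparison against the statistical model works on this number.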
Step 201c, comparing the characteristic information of the characteristic image with a preset statistical model, and determining the driving state of the driver; if the driver is in a fatigue driving state, proceed to step 301; if the driver is in a non-fatigue driving state, return to step 101 and monitor the information data at the next moment;
A preset number of characteristic images are collected as sample data, and the sample data are analyzed according to a preset algorithm to obtain the preset statistical model; the signal interaction flow is shown in fig. 1.
In this step, the driving state of the driver can be judged from the characteristic information by comparing it with the reference information of the preset statistical model. In addition, the driving state of the driver may be divided into various states according to demand, such as an awake state (non-fatigue driving state), a fatigue state (fatigue driving state), a semi-fatigue state (fatigue driving state), and the like;
the preset statistical model may include: a driver head movement range threshold; the feature image may include: a head image. Specifically, whether the moving track of the positioning coordinate exceeds a threshold of the moving range of the head of the driver or not can be judged through the determining submodule, and if the time length of exceeding the threshold is longer than a first preset time length, the driving state of the driver is judged to be a fatigue driving state. The threshold of the head movement range of the driver can be the threshold of the head movement range which is obtained by collecting a large amount of driving video information of the driver and is in accordance with the individual driving habits of the driver through analysis and modeling, for example, if some drivers like to listen to songs while driving, the head will shake along with the songs, and if some drivers are in a type of being attentive to driving without moving, the threshold of the head movement range determined by the two types of drivers will be different. When the head movement range exceeds the range threshold of the preset statistical model and continuously exceeds the threshold for a period of time, for example, the first preset duration is 4 seconds, it can be considered that the head of the driver is lowered for more than 4 seconds, and at this time, it is likely that the head is lowered due to dozing of the driver, and it is determined that the driver is in a fatigue driving state.
Alternatively, the preset statistical model may include an eye opening threshold, and the feature image may include an eye image. Specifically, the determining submodule can judge whether the eye opening characteristic parameter is smaller than a preset eye opening threshold; if the duration of being smaller than the threshold is longer than a second preset duration, the driving state of the driver is judged to be a fatigue driving state. For example, if the driver half-closes both eyes due to fatigue, the detected eye opening becomes smaller than the preset threshold; if this lasts for a period of time, for example a second preset duration of 5 seconds, it can be considered that the driver has half-closed the eyes for 5 seconds and has entered a fatigue driving state.
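The threshold-plus-duration test used in both examples above (head movement beyond a range, or eye opening below a threshold, each sustained for a preset duration) can be sketched as one generic check over timestamped samples:

```python
def fatigue_by_duration(samples, threshold, min_duration, below=True):
    """True if consecutive samples stay beyond `threshold` for at
    least `min_duration` seconds.

    `samples` is a list of (timestamp_s, value) pairs. With below=True
    the test is value < threshold (eye opening); with below=False it
    is value > threshold (head displacement outside its range).
    """
    run_start = None
    for t, v in samples:
        beyond = v < threshold if below else v > threshold
        if beyond:
            if run_start is None:
                run_start = t  # start timing the run
            if t - run_start >= min_duration:
                return True
        else:
            run_start = None   # value came back: reset the timer
    return False

# Eye opening stays below 0.5 from t=1s through t=6s -> 5 s sustained
samples = [(0, 0.8), (1, 0.4), (2, 0.4), (3, 0.3),
           (4, 0.4), (5, 0.4), (6, 0.3)]
```

The thresholds and durations here (0.5, 5 s) are illustrative placeholders for the values the preset statistical model would supply.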
In summary, in this embodiment the video image is sampled at a preset step length, and the frame images to be detected are extracted and analyzed, so the amount of video data to be analyzed is greatly reduced and the efficiency of determining the driving state is improved; and the driving state represented by the characteristic information is judged quickly and accurately by comparing the characteristic information in characteristic images such as the head image and the eye image with the preset statistical model.
Step 301, if it is determined that the driver is currently in a fatigue driving state, generating a control instruction according to the fatigue driving state, and sending the control instruction to the intelligent terminal.
Preferably, step 301 further comprises: when the current fatigue driving state of the driver is determined, comparing the fatigue driving state of the driver with a preset driving fatigue level, and sending a control instruction corresponding to the preset driving fatigue level.
It is judged from the current sign information that the driver is in a fatigue driving state; the driving state of the driver is then compared with a preset driving fatigue level, and a control instruction corresponding to that level is sent. This can be implemented by setting several comparison thresholds for the preset statistical model: characteristic parameters falling into different threshold ranges indicate different driving states. For example, with eye opening thresholds of 80% and 50%, the preset driving fatigue levels may correspond to sending no instruction, sending a deceleration instruction, and sending an emergency braking instruction. If the driver's eye opening is greater than 80%, the driver is considered awake and no instruction is sent. When the eye opening is between 80% and 50%, the driver is considered half-awake and half-fatigued, and a warning can be sent reminding the driver to stop and rest before continuing. When the eye opening is below 50%, a deceleration instruction is sent and the driver is reminded to stay alert or advised to stop and rest. Further, when the eye opening is detected to be 0, i.e. the eyes are closed, an emergency braking instruction can be sent to brake the vehicle so as to prevent a safety accident caused by fatigue driving.
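The example banding above (80% and 50% eye-opening thresholds mapped to no instruction, a warning, deceleration, and emergency braking) can be written as a simple lookup; the command names are illustrative, not part of the invention:

```python
def control_command(eye_opening_pct):
    """Map eye opening (in percent) to a control instruction, using
    the example thresholds from the text: 80% and 50%."""
    if eye_opening_pct > 80:
        return "none"             # awake: no instruction sent
    if eye_opening_pct > 50:
        return "warn"             # half-awake: advise a rest stop
    if eye_opening_pct > 0:
        return "decelerate"       # fatigued: slow the vehicle
    return "emergency_brake"      # eyes closed: brake the vehicle
```

The cloud server's instruction module would emit the returned command to the intelligent terminal, which forwards it to the vehicle controller.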
In another aspect, the fatigue driving monitoring method includes:
Step 102, receiving a fatigue state judgment request sent by the intelligent terminal, the request containing the driver's current sign information; the current sign information is a pulse signal containing the heart rate, respiration, or blood pressure of the driver;
The difference from the above embodiment is that the current sign information is a pulse signal containing the driver's heart rate, respiration, or blood pressure. The sensor records the pulse signal, and whether the driver is fatigued or unwell is judged by analyzing these signs. For example, if the heart rate exceeds 160 beats/minute or falls below 40 beats/minute, the driver is considered to be in a state of physical discomfort, such as palpitations or chest tightness caused by heart disease. The judgment can also use the driver's respiration: a respiratory frequency above 24 breaths/minute or below 12 breaths/minute likewise indicates a fatigued or unwell state. It can also use the driver's blood pressure: a systolic pressure above 150 mmHg with a diastolic pressure above 120 mmHg, or a systolic pressure below 80 mmHg with a diastolic pressure below 50 mmHg, likewise indicates a fatigued or unwell state. An emergency braking instruction is then sent to the controller to prevent the safety hazard caused by the driver's physical discomfort.
Step 202, analyzing and processing the current sign information to determine the driving state of the driver;
step 202R, processing the pulse signal and converting the pulse signal into a digital signal;
the processing process comprises the steps of firstly carrying out amplification, filtering, noise reduction and other processing on a pulse electric signal to improve the reliability of a sampling signal, and then converting the electric signal to obtain a digital signal, wherein the value of the digital signal directly reflects the heart rate, the respiratory rate or the blood pressure; the conversion sub-module may specifically include an amplifying circuit, a filtering circuit, and an analog-to-digital converter.
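As a software stand-in for the filtering stage (assuming, as the text says, that amplification and analog-to-digital conversion happen in the hardware sub-module), a simple moving average attenuates an isolated noise spike in the sampled values:

```python
def smooth(signal, window=3):
    """Moving-average filter standing in for the noise-reduction stage.

    Real systems do this with an amplifier, analogue filter, and ADC;
    this sketch only shows the effect on already-digitised samples.
    """
    if window < 1 or window > len(signal):
        raise ValueError("window must be between 1 and len(signal)")
    out = []
    for i in range(len(signal) - window + 1):
        out.append(sum(signal[i:i + window]) / window)
    return out

# A single 200-valued spike in a ~70 bpm heart-rate stream is attenuated.
filtered = smooth([70, 71, 200, 72, 70], window=3)
```

After filtering, the digital signal values directly reflect the heart rate, respiratory rate, or blood pressure and can be compared against thresholds.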
Step 202S, judging whether the value of the digital signal exceeds a threshold value and judging whether the duration time of exceeding the threshold value is greater than a first preset time;
The comparison sub-module retrieves the digital signal and compares its value with a preset threshold range; for example, with a heart rate threshold range of 40-160, a value of 180 exceeds the range and timing starts. If subsequent samples still exceed the range, the comparison continues until the Nth sample falls back within the threshold; the time interval between the sampling moment of the first sample exceeding the threshold and that of the (N-1)th sample (i.e. the duration for which the digital signal exceeded the threshold) is then calculated. If this interval is smaller than the first preset duration, comparison of the subsequent data resumes; if it is greater than or equal to the first preset duration, step 202T is performed;
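The comparison loop described above amounts to measuring how long the digital signal stays outside a preset range. A sketch over timestamped samples, using the 40-160 heart-rate range from the text (sample timestamps are illustrative):

```python
def exceeds_range_duration(samples, lo=40, hi=160):
    """Longest time (seconds) the digital signal stays outside [lo, hi].

    `samples` are (timestamp_s, value) pairs; when a value returns
    inside the range, timing resets, matching the described loop.
    """
    longest = 0.0
    start = None
    for t, v in samples:
        if v < lo or v > hi:
            if start is None:
                start = t                      # first sample over threshold
            longest = max(longest, t - start)  # interval to current sample
        else:
            start = None                       # back in range: reset timing
    return longest

# Heart rate stays above 160 from t=10 s to t=30 s.
hr = [(0, 120), (10, 180), (20, 182), (30, 181), (40, 120)]
```

Comparing the returned duration with the first preset duration decides whether to proceed to step 202T.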
step 202T, if the time length exceeding the threshold value is longer than a first preset time length, judging that the driving state of the driver is a fatigue driving state;
If the duration exceeding the threshold is longer than the first preset duration, for example a heart rate of 180 beats/minute lasting for 1 minute, the driver's physical state is considered poor and the driver is judged to be in a fatigue driving state;
and 302, if the driver is in a fatigue driving state, generating a control instruction according to the fatigue driving state, and sending the control instruction to the intelligent terminal. The signal interaction flow is shown in fig. 2.
Once the driver is judged to be in a fatigue driving state, generating control instructions of different levels according to the level of the fatigue state, wherein the control instructions can be general deceleration driving signals or emergency braking signals; the cloud server can send the control instruction to the intelligent terminal through the wireless network.
Preferably, when it is determined that the driver is currently in the fatigue driving state, the fatigue driving state of the driver is compared with a preset driving fatigue level, and a control instruction corresponding to the preset driving fatigue level is sent out.
It is determined from the current sign information that the driver is in a fatigue driving state; the driving state of the driver is then compared with a preset driving fatigue level, and a control instruction corresponding to that level is sent. This can be implemented by setting several comparison thresholds for the preset statistical model: parameters falling into different threshold ranges indicate different driving states. For example, with blood pressure thresholds of 80%, 100%, and 120% of a 150 mmHg reference, the preset driving fatigue levels may correspond to sending no instruction, sending a deceleration instruction, and sending an emergency braking instruction. If the driver's systolic pressure is below 80% of 150 mmHg, the driver is considered awake and no instruction is sent. When the systolic pressure remains between 80% and 100% of 150 mmHg, the driver is considered half-awake and half-fatigued, and a warning can be sent reminding the driver to consider stopping for a rest before continuing. When the systolic pressure exceeds 100% of 150 mmHg, a deceleration instruction is sent and the driver is reminded to stay alert or advised to stop and rest. Further, when the systolic pressure is detected to be above 120% of 150 mmHg, an emergency braking instruction can be sent to brake the vehicle so as to prevent a safety accident caused by fatigue driving.
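The blood-pressure banding above can also be written as a lookup. This sketch reads the bands as: below 80% of the 150 mmHg reference is awake, 80-100% triggers a warning, 100-120% deceleration, and above 120% emergency braking; the command names and that reading of the bands are assumptions for illustration:

```python
def bp_command(systolic_mmHg, reference=150.0):
    """Map systolic pressure to a control instruction using the
    example percentage bands from the text (80% / 100% / 120% of a
    150 mmHg reference)."""
    pct = systolic_mmHg / reference * 100
    if pct < 80:
        return "none"             # awake: no instruction sent
    if pct <= 100:
        return "warn"             # half-awake: advise a rest stop
    if pct <= 120:
        return "decelerate"       # fatigued: slow the vehicle
    return "emergency_brake"      # severe: brake the vehicle
```

As with the eye-opening example, the instruction module would send the returned command to the intelligent terminal for forwarding to the vehicle controller.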
Referring to fig. 4, the cloud server includes a remote receiving port 10, a judgment module 20, and an instruction module 30.
The remote receiving port 10 is used for receiving the fatigue state judgment request sent by the intelligent terminal, the request containing the driver's current sign information. The cloud server receives the request through the wireless network by means of the remote receiving port 10, and stores the request and the information data in the memory.
The judgment module 20 is configured to analyze and process the current sign information and determine the driving state of the driver: the stored data is retrieved and analyzed accordingly, and the driving state is finally determined, i.e., a fatigue driving state or a non-fatigue driving state.
The instruction module 30 is configured to generate a control instruction according to the fatigue driving state when it is determined that the driver is currently in the fatigue driving state, and send the control instruction to the intelligent terminal.
Once the driver is judged to be in the fatigue driving state, control instructions of different levels are generated according to the level of the fatigue driving state; a control instruction may be a general deceleration signal or an emergency braking signal. The cloud server then sends the control instruction to the intelligent terminal through the wireless network.
Further, when it is determined that the driver is currently in the fatigue driving state, the fatigue state may be graded so that a control instruction corresponding to each grade is generated. In this embodiment, the instruction module 30 is specifically configured to compare the fatigue driving state of the driver with a preset driving fatigue level and issue the control instruction corresponding to that level. The fatigue grades and their corresponding control instructions can be set according to actual needs; this can be implemented by setting a plurality of comparison thresholds for the preset statistical model, so that different driving states are judged when the characteristic parameters in the characteristic image fall within different comparison threshold ranges.
According to the technical scheme, the intelligent vehicle-mounted unit collects information containing the driver's current physical signs and sends it to the cloud server. After receiving the information, the cloud server judges whether the driver is currently in a fatigue state, generates a corresponding control instruction according to the level of the fatigue state, and sends it back to the intelligent vehicle-mounted unit; the intelligent terminal then forwards the control instruction to the vehicle controller, which executes it. The driver's fatigue state can thus be monitored, and once the driver is driving while fatigued, the vehicle controller is forced to execute a control instruction, such as decelerating or stopping the vehicle, thereby fundamentally eliminating the potential safety hazard brought by fatigue driving.
Further, different kinds of current physical sign information call for different specific architectures of the corresponding determining module 20, which are described in detail below:
in an embodiment of the present invention, as shown in fig. 1, the current sign information is a video image containing the head, face or hand of the driver, and accordingly the determining module 20 further includes a positioning sub-module 201, an analyzing sub-module 202 and a determining sub-module 203.
The driver's current physical sign information is a video image containing the driver's bodily features, which may specifically include the facial, head or hand features of the driver. The intelligent terminal periodically communicates with the camera through its acquisition module to acquire the video image, and connects to the network through its sending module via a network interface so as to package and send the sign information and the fatigue state request to the cloud server. The sending module and the remote receiving port 10 may be embodied as input/output (I/O) interfaces.
In this embodiment, in order to automatically identify the driver appearing in the video, frame images are extracted from the video, the frame images containing face images are selected, and a preset algorithm is then applied to those frame images to recognize the face images and thereby identify the driver information in the video. Specifically, this embodiment determines the driver information in the video based on a pre-obtained recognition model together with face detection and tracking techniques, and then presents the recognized driver information to a user viewing the video.
The positioning sub-module 201 is configured to detect the video image and locate a feature image within it. The video image consists of successive frame images; during detection, each frame image is scanned, any feature image appearing in the frame is located, its position coordinates within the frame are marked, and the position information of the feature image is thereby determined.
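A minimal sketch of this per-frame scan is given below. The `detect_feature` routine is a hypothetical stand-in for a real detector (e.g. a trained face or hand detector); here it is reduced to a stub so the scanning logic is visible:

```python
# Scan each frame of a video and record the position coordinates of
# the feature image found in it, as the positioning sub-module does.

def detect_feature(frame):
    """Hypothetical detector: return (x, y, w, h) of the feature image
    in this frame, or None if no feature is found. A real system would
    run a face/hand detector here; this stub reads a precomputed box."""
    return frame.get("face_box")

def locate_features(frames):
    """Map frame index -> bounding box for every frame with a feature."""
    positions = {}
    for idx, frame in enumerate(frames):
        box = detect_feature(frame)
        if box is not None:
            positions[idx] = box  # position coordinates within the frame
    return positions
```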
The analysis sub-module 202 is configured to analyze the feature image and determine its feature information. Feature images can be classified into several types depending on the criterion used to judge the driving state of the driver, for example head images, eye images, mouth images, and hand images including the steering wheel; depending on the camera resolution, an eye image may be further divided into an iris image, a pupil image and the like. The analysis sub-module analyzes each feature image according to its own attribute characteristics and determines the feature information it contains. For example, if the feature image is an eye image, the feature information may include the opening degree value between the upper and lower eyelids, pupil opening degree characteristic parameters, eyeball contour size values and the like.
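As one illustration of extracting the eyelid-opening value, the commonly used eye-aspect-ratio computation is sketched below; the six landmark points and their ordering are an assumption of this sketch, not something specified in the text:

```python
import math

def eye_aspect_ratio(p):
    """Eyelid-opening measure from six eye landmarks p[0..5], ordered:
    left corner, two upper-lid points, right corner, two lower-lid
    points (the usual eye-aspect-ratio convention). A small value means
    the eye is nearly closed -- a possible fatigue indicator."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(p[1], p[5]) + dist(p[2], p[4])   # lid separations
    horizontal = dist(p[0], p[3])                    # eye width
    return vertical / (2.0 * horizontal)
```

The ratio is roughly constant while the eye is open and collapses toward zero as the lids close, so it can serve as the "opening degree value" mentioned above.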
The determining sub-module 203 is configured to compare the feature information with the preset statistical model and determine the driving state of the driver. If the driver is in a fatigue driving state, the instruction module 30 is triggered; if the driver is in a non-fatigue driving state, the information data is returned to the remote receiving port 10 for monitoring at the next moment.
In another embodiment of the cloud server of the present invention, as shown in fig. 2, the difference from the above embodiment is that the current sign information is a pulse signal containing the heart rate, respiration or blood pressure of the driver, and accordingly the determining module 20 includes a converting submodule 204, a comparing submodule 205 and a determining submodule 206.
The conversion sub-module 204 is configured to process the pulse signal and convert it into a digital signal. Pulse signals reflecting physical signs such as the driver's heart rate, respiration or blood pressure are recorded by a sensor; the electrical pulse signal is first amplified, filtered and denoised to improve the reliability of the sampled signal, and is then converted into a digital signal whose value directly reflects the heart rate, respiration rate or blood pressure. The conversion sub-module 204 may specifically include an amplifying circuit, a filtering circuit, an analog-to-digital converter, and the like.
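The amplify-filter-digitize chain is hardware in the text; a software analogue of the same pipeline, with the gain, filter window and ADC resolution chosen purely for illustration, looks like:

```python
def moving_average(samples, window=3):
    """Simple denoising filter: average each sample with its neighbours."""
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - window + 1), i + 1
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

def quantize(value, full_scale=5.0, bits=10):
    """Model an ADC: map a 0..full_scale voltage to a digital code."""
    levels = (1 << bits) - 1
    clamped = min(max(value, 0.0), full_scale)
    return round(clamped / full_scale * levels)

def convert(raw_samples, gain=2.0):
    """Amplify, filter, then digitize a raw pulse waveform,
    mirroring the conversion sub-module's three stages."""
    amplified = [gain * s for s in raw_samples]
    filtered = moving_average(amplified)
    return [quantize(v) for v in filtered]
```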
The comparison sub-module 205 is configured to judge whether the duration for which the value of the digital signal exceeds a threshold is greater than a first preset duration. It reads the digital signal and compares its value with the preset threshold: if the duration for which the value exceeds the threshold is less than the first preset duration, it returns to compare the next data; if that duration is greater than or equal to the first preset duration, the judgment sub-module 206 is triggered.
The judgment sub-module 206 is configured to judge that the driving state of the driver is the fatigue driving state, and to trigger the instruction module 30, when the duration for which the value of the digital signal exceeds the threshold is greater than the first preset duration.
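The duration check performed by sub-modules 205 and 206 can be sketched as follows; the sampling period, threshold and first preset duration are arbitrary illustrative parameters, not values from the text:

```python
def is_fatigued(samples, threshold, min_duration, sample_period):
    """Return True if the digital signal stays above `threshold` for at
    least `min_duration` seconds, given one sample every `sample_period`
    seconds (sub-modules 205 and 206 combined)."""
    run = 0.0  # how long the signal has currently been above threshold
    for value in samples:
        if value > threshold:
            run += sample_period
            if run >= min_duration:
                return True   # trigger the instruction module
        else:
            run = 0.0         # below threshold: reset, compare next data
    return False
```

Note that a brief spike above the threshold resets nothing by itself; only a sustained excursion longer than the first preset duration is judged as fatigue.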
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.