
CN116985819A - Driving state monitoring method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116985819A
CN116985819A (application CN202310789333.6A)
Authority
CN
China
Prior art keywords
target object
fatigue
determining
target
account
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310789333.6A
Other languages
Chinese (zh)
Inventor
汪勇泉
高敏
王友兰
韦彩霞
刘金平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chery Automobile Co Ltd
Original Assignee
Chery Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chery Automobile Co Ltd filed Critical Chery Automobile Co Ltd
Priority to CN202310789333.6A
Publication of CN116985819A
Priority to PCT/CN2024/100465 (WO2025001968A1)
Legal status: Pending

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0872Driver physiology
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/229Attention level, e.g. attentive to driving, reading or sleeping

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a driving state monitoring method, device, equipment, and storage medium, belonging to the technical field of vehicles. According to the driving state monitoring method provided by the application, facial features of a target object are extracted from a face image of the target object and sent to a cloud server; the cloud server determines the target account corresponding to the facial features and obtains the account data of the target account. The fatigue state of the target object is determined from the face image, and when the target object is driving while fatigued, a reminding mode corresponding to the fatigue level is obtained from the account data of the target account and used to remind the target object, thereby ensuring that the user drives safely and reducing the incidence of traffic accidents.

Description

Driving state monitoring method, device, equipment and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a driving state monitoring method, device, apparatus, and storage medium.
Background
With the increasing use of vehicles, the incidence of traffic accidents is also rising, and fatigued driving is one of the major contributing factors. When a user drives a vehicle for a long time, drowsiness, limb weakness, distraction, and impaired judgment can occur; these conditions compromise the user's safety while driving and can lead to traffic accidents. How to ensure that users drive safely and reduce the incidence of traffic accidents has therefore become an urgent problem in the field.
Disclosure of Invention
The embodiments of the application provide a driving state monitoring method, device, equipment, and storage medium, which can remind a user in time when the user is driving while fatigued, thereby ensuring that the user drives safely and reducing the incidence of traffic accidents. The technical scheme is as follows:
in one aspect, a driving state monitoring method is provided, the method including:
acquiring a face image of a target object driving a vehicle;
extracting facial features of the target object based on the face image, and sending the facial features to a cloud server, the cloud server being configured to determine a target account of the target object based on the facial features and to acquire account data of the target account;
receiving the account data of the target account sent by the cloud server;
determining, based on the face image, the eye-closing frequency and eye-closing duration of the target object within a first preset time period;
determining a fatigue state of the target object based on the eye-closing frequency and the eye-closing duration;
when the target object is driving while fatigued, acquiring a first correspondence between fatigue levels and reminding modes from the account data of the target account;
determining the reminding mode corresponding to the fatigue level of the fatigue state from the first correspondence;
and reminding the target object based on the reminding mode.
In one possible implementation, the determining the fatigue state of the target object based on the eye-closing frequency and the eye-closing duration includes:
when the eye-closing frequency reaches a first frequency and the eye-closing duration is longer than a first duration and shorter than a second duration, determining that the target object is driving while fatigued and the fatigue level is a first level;
when the eye-closing frequency reaches a second frequency and the eye-closing duration is longer than the second duration and shorter than a third duration, determining that the target object is driving while fatigued and the fatigue level is a second level;
when the eye-closing frequency reaches a third frequency and the eye-closing duration is longer than the third duration, determining that the target object is driving while fatigued and the fatigue level is a third level, where the third frequency is less than the second frequency, and the second frequency is less than the first frequency.
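The three-level classification above can be sketched as a small function. All threshold values below are illustrative placeholders, since the patent leaves the concrete frequencies and durations unspecified; note that the more severe levels pair a lower closing frequency with a longer closing duration, so the most severe level is checked first:

```python
def classify_fatigue(closure_freq, closure_duration,
                     freqs=(10, 6, 3), durations=(0.5, 1.5, 3.0)):
    """Map eye-closure statistics to (is_fatigued, level).

    freqs: (first, second, third) frequency thresholds, third < second < first.
    durations: (first, second, third) duration bounds in seconds.
    All threshold values are hypothetical placeholders.
    """
    f1, f2, f3 = freqs
    d1, d2, d3 = durations
    # Checked from most to least severe so the highest matching level wins.
    if closure_freq >= f3 and closure_duration > d3:
        return True, 3
    if closure_freq >= f2 and d2 < closure_duration <= d3:
        return True, 2
    if closure_freq >= f1 and d1 < closure_duration <= d2:
        return True, 1
    return False, 0
```

For example, five closures with four-second closures in the window would map to the third (most severe) level, while twelve brief closures of one second each would map to the first level.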
In another possible implementation, the method further includes:
determining a gaze area of the target object's line of sight based on the face image;
determining that the target object is in a distracted state when the gaze area does not match a preset area;
acquiring a second correspondence between abnormal states and reminding modes from the account data of the target account;
acquiring a first reminding mode corresponding to the distracted state from the second correspondence;
and reminding the target object based on the first reminding mode.
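The gaze check and the lookup in the second correspondence can be sketched as follows. The rectangular preset region, the coordinate frame, and the state names are illustrative assumptions, not taken from the patent:

```python
def check_distraction(gaze_point, preset_region):
    """Return True when the gaze point falls outside the preset region.

    gaze_point: (x, y) estimated from the face image.
    preset_region: (x_min, y_min, x_max, y_max) bounding box of the
    area a driver is expected to watch. Coordinates are illustrative.
    """
    x, y = gaze_point
    x_min, y_min, x_max, y_max = preset_region
    inside = x_min <= x <= x_max and y_min <= y <= y_max
    return not inside

def alert_for_state(state, second_correspondence):
    """Look up the reminding mode for an abnormal state in the account data."""
    return second_correspondence.get(state)
```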
In another possible implementation, the method further includes:
detecting a hand state of the target object based on the face image;
determining a distance between the hand and the ear of the target object when the hand of the target object is detected to be holding an electronic device;
determining that the target object is in a call state when the distance is smaller than a preset distance;
acquiring a second reminding mode corresponding to the call state from the second correspondence;
and reminding the target object based on the second reminding mode.
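The call-state decision reduces to a distance threshold, as sketched below. The coordinate frame and the 0.15 m threshold are hypothetical values for illustration:

```python
import math

def is_on_call(holding_device, hand_pos, ear_pos, preset_distance=0.15):
    """Decide whether the driver is in a call state.

    holding_device: True if a handheld electronic device was detected.
    hand_pos / ear_pos: 3-D points in metres (illustrative coordinate frame).
    preset_distance: hypothetical threshold below which hand-to-ear
    proximity is treated as a phone call.
    """
    if not holding_device:
        return False
    distance = math.dist(hand_pos, ear_pos)
    return distance < preset_distance
```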
In another possible implementation, the method further includes:
detecting a mouth state of the target object when the hand of the target object is detected to be holding a smoking article (e.g., a cigarette);
determining that the target object is in a smoking state when the smoking article is detected to be held in the mouth of the target object;
acquiring the smoke concentration in the vehicle, and determining the environment in which the vehicle is located when the smoke concentration is greater than a preset concentration;
determining a ventilation mode based on the environment in which the vehicle is located;
and ventilating based on the ventilation mode.
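The ventilation decision above can be sketched as a small policy function. The concentration unit, the threshold, the environment labels, and the two ventilation modes are all illustrative assumptions; the patent does not name concrete modes:

```python
def choose_ventilation(smoke_ppm, environment, preset_ppm=300):
    """Pick a ventilation mode once in-cabin smoke exceeds the threshold.

    environment: "clear" or "polluted" -- e.g. a tunnel or heavy-traffic
    stretch where opening windows is undesirable. Values and mode names
    are hypothetical, for illustration only.
    """
    if smoke_ppm <= preset_ppm:
        return None  # no ventilation action needed
    if environment == "clear":
        return "open_windows"
    return "recirculating_purifier"
```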
In another possible implementation, the method further includes:
determining the emotion type of the target object based on the face features;
acquiring identity information of the target object from account data of the target account under the condition that the emotion type of the target object is positive emotion;
and broadcasting a voice message or recommending music for the target object based on the positive emotion and the identity information.
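The emotion-driven recommendation can be sketched as below. The identity keys and the age-to-genre mapping are purely illustrative assumptions; the patent only says that a voice message is broadcast or music is recommended based on the positive emotion and the identity information:

```python
def recommend(emotion, identity):
    """Broadcast a greeting or pick a playlist for a positive emotion.

    identity: dict with hypothetical 'age' (and similar) keys taken from
    the account data; the genre mapping is purely illustrative.
    """
    if emotion != "positive":
        return None  # only positive emotions trigger a recommendation
    if identity.get("age", 0) < 30:
        return {"message": "Enjoy the drive!", "playlist": "pop"}
    return {"message": "Enjoy the drive!", "playlist": "classical"}
```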
In another possible implementation, the method further includes:
acquiring position information of a main driving seat and position information of a rearview mirror from account data of the target account;
adjusting the position of the main driving seat and the position of the rearview mirror based on the position information of the main driving seat and the position information of the rearview mirror.
In another aspect, there is provided a driving state monitoring device, the device including:
the first acquisition module is used for acquiring a face image of a target object driving the vehicle;
the extraction module is used for extracting the face characteristics of the target object based on the face image and sending the face characteristics to a cloud server; the cloud server is used for determining a target account of the target object based on the face characteristics and acquiring account data of the target account;
the receiving module is used for receiving the account data of the target account sent by the cloud server;
the first determining module is used for determining the eye closing frequency and the eye closing time of the target object in a first preset time based on the face image;
a second determining module for determining a fatigue state of the target object based on the eye closing frequency and the eye closing duration;
the second acquisition module is used for acquiring a first corresponding relation between the fatigue grade and the reminding mode from the account data of the target account under the condition that the target object is in fatigue driving;
the third determining module is used for determining a reminding mode corresponding to the fatigue level from the first corresponding relation based on the fatigue level corresponding to the fatigue state;
and the reminding module is used for reminding the target object based on the reminding mode.
In one possible implementation, the second determining module is configured to: determine that the target object is driving while fatigued and the fatigue level is a first level when the eye-closing frequency reaches a first frequency and the eye-closing duration is longer than a first duration and shorter than a second duration; determine that the target object is driving while fatigued and the fatigue level is a second level when the eye-closing frequency reaches a second frequency and the eye-closing duration is longer than the second duration and shorter than a third duration; and determine that the target object is driving while fatigued and the fatigue level is a third level when the eye-closing frequency reaches a third frequency and the eye-closing duration is longer than the third duration, where the third frequency is less than the second frequency, and the second frequency is less than the first frequency.
In another possible implementation, the apparatus further includes:
a fourth determining module, configured to determine a gaze area of the target object line of sight based on the face image;
a fifth determining module, configured to determine that the target object is in a distracted state when the gaze area does not match a preset area;
the third acquisition module is used for acquiring a second correspondence between abnormal states and reminding modes from the account data of the target account;
a fourth obtaining module, configured to obtain a first alert mode corresponding to the distraction state from the second correspondence;
the reminding module is further used for reminding the target object based on the first reminding mode.
In another possible implementation, the apparatus further includes:
the first detection module is used for detecting the hand state of the target object based on the face image;
a sixth determining module, configured to determine a distance between a hand and an ear of the target object when the hand of the target object is detected to hold the electronic device;
a seventh determining module, configured to determine that the target object is in a call state when the distance is smaller than a preset distance;
a fifth obtaining module, configured to obtain a second alert mode corresponding to the call state from the second correspondence;
the reminding module is further used for reminding the target object based on the second reminding mode.
In another possible implementation, the apparatus further includes:
the second detection module is used for detecting the mouth state of the target object when the hand of the target object is detected to be holding a smoking article;
an eighth determining module, configured to determine that the target object is in a smoking state when the smoking article is detected to be held in the mouth of the target object;
an eighth determining module, configured to obtain the smoke concentration in the vehicle and determine the environment in which the vehicle is located when the smoke concentration is greater than a preset concentration;
a ninth determining module, configured to determine a ventilation mode based on an environment in which the vehicle is located;
and the ventilation module is used for ventilating based on the ventilation mode.
In another possible implementation, the apparatus further includes:
a tenth determining module, configured to determine, based on the face feature, a mood type of the target object;
a sixth obtaining module, configured to obtain, when the emotion type of the target object is a positive emotion, identity information of the target object from account data of the target account;
and the recommending module is used for broadcasting voice messages or recommending music for the target object based on the positive emotion and the identity information.
In another possible implementation, the apparatus further includes:
a seventh obtaining module, configured to obtain, from account data of the target account, position information of a main driving seat and position information of a rearview mirror;
and the adjusting module is used for adjusting the position of the main driving seat and the position of the rearview mirror based on the position information of the main driving seat and the position information of the rearview mirror.
In another aspect, a control apparatus is provided, the control apparatus including a processor and a memory, the memory storing at least one program code, the at least one program code loaded and executed by the processor to implement the driving state monitoring method of any one of the above.
In another aspect, a computer readable storage medium having at least one program code stored therein is provided, the at least one program code loaded and executed by a processor to implement the driving state monitoring method of any one of the above.
In another aspect, a computer program product is provided, in which at least one program code is stored, which is loaded and executed by a processor to implement the driving state monitoring method according to any of the above.
The embodiment of the application provides a driving state monitoring method in which facial features of a target object are extracted from a face image of the target object and sent to a cloud server; the cloud server determines the target account corresponding to the facial features and obtains the account data of the target account. The fatigue state of the target object is determined from the face image, and when the target object is driving while fatigued, a reminding mode corresponding to the fatigue level is obtained from the account data of the target account and used to remind the target object, thereby ensuring that the user drives safely and reducing the incidence of traffic accidents.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Fig. 1 is a schematic diagram of an implementation environment of a driving state monitoring method according to an embodiment of the present application;
Fig. 2 is a flowchart of a driving state monitoring method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of interaction between a control device and a cloud server according to an embodiment of the present application;
Fig. 4 is a schematic diagram of data processing by a control device through a DMS according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a driving state monitoring device according to an embodiment of the present application;
Fig. 6 is a block diagram of a control device according to an embodiment of the present application.
Detailed Description
To make the technical scheme and advantages of the present application clearer, the embodiments of the present application are described in further detail below.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprising," "including," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that the information (including but not limited to user equipment information and user personal information), data (including but not limited to data for analysis, stored data, and presented data), and signals involved in the present application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the identity information, emotion types, and account data referred to in the present application are all acquired with sufficient authorization.
Fig. 1 is a schematic diagram of an implementation environment of a driving state monitoring method according to an embodiment of the present application. The implementation environment includes a camera module 10, a control device 11, a cloud server 12, and a T-BOX 13 (telematics box); the camera module 10, the control device 11, and the T-BOX 13 are located in the same vehicle, which may be a fuel vehicle, an electric vehicle, or a hybrid vehicle and is not particularly limited.
In the embodiment of the present application, the camera module 10 may collect a face image of a target object and send the face image to the control device 11. The control device 11 extracts the face features in the face image, sends the face features to the T-BOX13, the T-BOX13 forwards the face features to the cloud server 12, the cloud server 12 compares the face features with the face features stored in the cloud database, determines a target account corresponding to the face features, obtains account data of the target account, sends the account data of the target account to the control device 11 through the T-BOX13, and the control device 11 receives and locally stores the account data of the target account.
And, after the control device 11 acquires the face image, the driving state of the target object may be determined based on the face image, the driving state including: fatigue state, distraction state, call state, smoking state, etc., and according to the driving state of the target object, acquiring a corresponding reminding mode from the account data of the target account, thereby reminding the target object.
The control device 11 may be a head unit or a vehicle controller, where the head unit is an in-vehicle infotainment product that includes a host and a display screen. The cloud server 12 may be at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The camera module 10 is a camera for capturing the target object located in the main driving seat. The T-BOX 13 provides network transmission capability and realizes data transmission between the control device 11 and the cloud server 12.
In the embodiment of the application, the control device can monitor the fatigue state, distraction state, call state, and smoking state of the target object and remind the target object in a corresponding reminding mode. The process of monitoring the fatigue state is described first.
Fig. 2 is a flowchart of a driving state monitoring method provided by an embodiment of the present application. The method is executed by a control device. Referring to Fig. 2, the method includes:
step 201: the control apparatus acquires a face image of a target object driving a vehicle.
The target object is an object to drive the vehicle, that is, an object located on the main driving seat.
In this step, the camera module can acquire face images in real time or periodically and, whenever a face image is acquired, send it to the control device; correspondingly, the control device receives the face image sent by the camera module.
If the camera module is already on, it can directly acquire images. If the camera module is off, the control device can display a state monitoring option on the display screen and, in response to a trigger operation on that option, send an opening instruction to the camera module; once on, the camera module acquires face images in real time or periodically.
In the embodiment of the application, after acquiring a face image, the control device determines its sharpness and integrity. If the sharpness or integrity does not meet the requirements, a prompt message is displayed or broadcast by voice to remind the user to adjust posture, so that the camera module can capture a face image that meets the requirements. If both the sharpness and integrity meet the requirements, the subsequent steps are executed.
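The quality gate described above can be sketched as a single predicate. The concrete metrics and thresholds are assumptions: sharpness could be, for example, the variance of a Laplacian response over the image, and completeness the fraction of expected facial landmarks found:

```python
def image_acceptable(sharpness, completeness,
                     min_sharpness=100.0, min_completeness=0.9):
    """Gate a captured face image before feature extraction.

    sharpness: e.g. variance of a Laplacian filter response (hypothetical).
    completeness: fraction of expected facial landmarks found (hypothetical).
    Returns (ok, prompt) where prompt is the message shown to the user.
    """
    if sharpness < min_sharpness or completeness < min_completeness:
        return False, "Please adjust your posture so the camera sees your face."
    return True, None
```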
Step 202: based on the face image, the control device extracts the face characteristics of the target object and sends the face characteristics to the cloud server.
The control device extracts the facial features from the face image and sends them to the cloud server through the T-BOX. The cloud server compares the facial features with the plurality of facial features stored in its cloud database and searches for a stored target facial feature that matches them. If a matching target facial feature is found, the cloud server determines the target account corresponding to it based on the correspondence between facial features and accounts, and then acquires the account data of the target account from the cloud database, see Fig. 3.
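The server-side comparison might look like the sketch below. The patent does not specify the similarity metric or threshold; cosine similarity with a 0.8 cutoff is an illustrative assumption:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_account(query, database, threshold=0.8):
    """Return the account whose stored feature best matches the query.

    database: {account_id: feature_vector}. Metric and threshold are
    hypothetical; the patent leaves both unspecified.
    """
    best_id, best_score = None, threshold
    for account_id, feature in database.items():
        score = cosine_similarity(query, feature)
        if score > best_score:
            best_id, best_score = account_id, score
    return best_id  # None -> no match, which triggers registration
```

A `None` result corresponds to the case where the cloud server cannot find matching facial features and the control device shows the registration interface.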
To extract the facial features from the face image, the control device can transmit the face image to the DMS (Driver Monitoring System), which forwards it to a visual perception service module via ZMQ communication. The visual perception service module extracts the facial features from the face image and returns them to the DMS, which passes them to the control device; the control device then sends them to the cloud server through the T-BOX. In addition, the DMS may also acquire vehicle body data and transmit it to the visual perception service module for processing, where the vehicle body data is CAN (Controller Area Network) data, see Fig. 4.
In addition, the account data of the target account includes the first correspondence, the second correspondence, identity information of the target object, vehicle body data, and so on. The first correspondence is between fatigue levels and reminding modes, and the second correspondence is between abnormal states and reminding modes. The identity information of the target object includes the target object's gender, age, and the like. The vehicle body data includes the position information of the main driving seat, the position information of the rearview mirror, the height information of the steering wheel, and other personalized driving-habit settings, such as the brightness of the ambient lamp in the vehicle, the preferred music style, and settings for the HUD (Head-Up Display), the ADAS (Advanced Driving Assistance System), the audio host, and so on, which are not particularly limited. The audio host can provide the user with Bluetooth, phone interconnection, navigation, voice, telephone, music, video, games, life services, applets, and other functions.
If the cloud server cannot find matching facial features, it returns a first notification message to the control device through the T-BOX; the control device displays a registration interface based on the first notification message, and the target object registers through the registration interface.
The registration process may be: the target object triggers the control device to display a registration interface that includes a plurality of registration options in which the target object can enter registration information. In response to the registration information being obtained, the control device sends the registration information to the cloud server through the T-BOX, the cloud server generates a target account based on the registration information, and a corresponding relation between the face features and the target account is established.
Of course, the target object may also be actively registered when the vehicle is driven for the first time, and the registration process is the same as the above registration process, and will not be described again.
In the embodiment of the application, the corresponding relation between the face features and the account numbers is pre-established in the cloud server, and the corresponding account numbers are determined according to the face features, so that the account number data of the corresponding account numbers can be obtained through the face features when a user drives other vehicles, and the user can be reminded in a corresponding reminding mode when the user is in a fatigue state or other abnormal states in the process of driving other vehicles, thereby improving the safety of the user in the driving process.
Step 203: and the control equipment receives account data of the target account sent by the cloud server.
After the cloud server acquires the account data of the target account, the cloud server sends the account data of the target account to the control equipment through the T-BOX, and the control equipment receives and locally stores the account data of the target account.
In the embodiment of the application, the control equipment locally stores the account data of the target account, and can directly acquire the corresponding reminding mode from the local account data later, so that interaction with a cloud server is not needed, and the time is greatly shortened.
In addition, when the cloud server sends the account data of the target account to the control device, the corresponding relation between the target face features and the target account can be sent, and the control device locally stores the corresponding relation between the target face features and the target account, so that when the next time the target object drives the vehicle, the comparison of the face features can be directly carried out from the local, interaction with the cloud server is not needed, and the time is further shortened. In order to avoid the influence of excessive local data storage amount on the processing speed of the equipment, the control equipment can delete account data with less use frequency and the corresponding relation between the account and the face characteristics at regular intervals.
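The local caching and periodic deletion of rarely used entries described above can be sketched as follows; the idle-time eviction policy and the 30-day default horizon are assumptions for illustration:

```python
import time

class AccountCache:
    """Hypothetical local store of account data keyed by face features,
    with periodic eviction of entries that have not been used for a while."""

    def __init__(self, max_idle_seconds=30 * 24 * 3600):
        self.max_idle_seconds = max_idle_seconds
        self._entries = {}  # face-feature key -> (account_data, last_used)

    def put(self, face_key, account_data):
        self._entries[face_key] = (account_data, time.time())

    def get(self, face_key):
        """Return cached account data and refresh its last-used time;
        None means the control device must fall back to the cloud server."""
        entry = self._entries.get(face_key)
        if entry is None:
            return None
        account_data, _ = entry
        self._entries[face_key] = (account_data, time.time())
        return account_data

    def evict_stale(self, now=None):
        """Delete entries idle longer than the horizon; return how many were removed."""
        now = time.time() if now is None else now
        stale = [k for k, (_, t) in self._entries.items()
                 if now - t > self.max_idle_seconds]
        for k in stale:
            del self._entries[k]
        return len(stale)
```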
Step 204: the control device determines the eye closing frequency and the eye closing time of the target object in a first preset time period based on the face image.
For each face image, the control device determines the degree of eye closure of the target object in the face image, determines whether the target object's eyes are closed according to the degree of eye closure, and, in the case that the eyes are closed, determines the eye-closing frequency and the eye-closing duration of the target object within the first preset duration based on a plurality of temporally adjacent face images.
The control device may determine the eye-closing frequency and the eye-closing time period by itself, or may determine the eye-closing frequency and the eye-closing time period by DMS, which is not particularly limited. The first preset duration may be set and changed as required, for example, the first preset duration is 30s or 40s, which is not limited in detail.
Step 205: the control device determines a fatigue state of the target object based on the eye closing frequency and the eye closing time period.
In one possible implementation manner, when the eye closing frequency reaches the first frequency and the eye closing time length is longer than the first time length and is shorter than the second time length, the control device determines that fatigue driving occurs on the target object and the fatigue grade is the first grade.
In another possible implementation manner, when the eye closing frequency reaches the second frequency and the eye closing time period is longer than the second time period and is shorter than the third time period, the control device determines that fatigue driving occurs on the target object and the fatigue grade is the second grade.
In another possible implementation manner, in a case where the eye-closing frequency reaches a third frequency and the eye-closing time period is longer than a third time period, the control device determines that fatigue driving occurs on the target object, and the fatigue grade is a third grade.
In the above three implementations, the third frequency is smaller than the second frequency, and the second frequency is smaller than the first frequency. And the third level of fatigue is greater than the second level of fatigue, which is greater than the first level of fatigue. For example, the third grade is heavy fatigue, the second grade is moderate fatigue, and the first grade is light fatigue.
The first frequency, the second frequency, the third frequency, the first duration, the second duration and the third duration can be set and changed as required. For example, the first frequency is 9 times, the second frequency is 3 times, the third frequency is 2 times, the first duration is 1 s, the second duration is 3 s, and the third duration is 5 s. Correspondingly, if the eye-closing frequency of the target object within 30 s reaches 9 times, and each eye-closing duration is longer than 1 s and shorter than 3 s, the control device determines that the target object is slightly fatigued. If the eye-closing frequency of the target object within 30 s reaches 3 times, and each eye-closing duration is longer than 3 s and shorter than 5 s, the control device determines that the target object is moderately fatigued. If the eye-closing frequency of the target object within 30 s reaches 2 times and each eye-closing duration is longer than 5 s, the control device determines that the target object is severely fatigued.
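Under the example thresholds above, the three-level fatigue decision can be sketched as follows; the function is a hypothetical illustration, not the patent's normative algorithm, and hard-codes the example frequencies and durations:

```python
def classify_fatigue(closure_durations):
    """Classify one 30 s window of per-closure durations (seconds).

    Returns 0 = awake, 1 = mild (first level), 2 = moderate (second level),
    3 = severe (third level). Thresholds follow the example values in the
    text and are assumptions, not fixed by the patent.
    """
    # Severe: at least 2 closures, each longer than 5 s (third frequency/duration).
    if len([d for d in closure_durations if d > 5.0]) >= 2:
        return 3
    # Moderate: at least 3 closures between 3 s and 5 s.
    if len([d for d in closure_durations if 3.0 < d < 5.0]) >= 3:
        return 2
    # Mild: at least 9 closures between 1 s and 3 s.
    if len([d for d in closure_durations if 1.0 < d < 3.0]) >= 9:
        return 1
    return 0
```

Note that the checks run from most to least severe, so a window satisfying two levels reports the higher one.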
In the embodiment of the present application, the control device may execute steps 202-203 first and then execute steps 204-205, or may execute steps 204-205 first and then execute steps 202-203, which is not limited in particular.
Step 206: under the condition that fatigue driving occurs to a target object, the control equipment acquires a first corresponding relation between the fatigue grade and the reminding mode from account data of a target account.
The control equipment locally stores account data of the target account, and under the condition that fatigue driving occurs on a target object, obtains a first corresponding relation from the locally stored account data of the target account, wherein the first corresponding relation is a corresponding relation between fatigue grades and reminding modes, and different fatigue grades correspond to different reminding modes.
If the target object drives the host vehicle for the first time, the reminding mode corresponding to each fatigue level in the first correspondence may be a default reminding mode of the control device, and the same fatigue level may correspond to one or more reminding modes. If the target object does not drive the host vehicle for the first time, the reminding mode corresponding to each fatigue level in the first correspondence is an effective reminding mode, that is, a reminding mode by which the fatigue degree of the target object can be relieved, and there may be one or more effective reminding modes.
In the embodiment of the application, different account numbers correspond to different reminding modes in the first correspondence; that is, different objects are reminded in different ways when fatigue driving occurs, because some reminding modes are effective for a first object while other reminding modes are effective for a second object. For example, when the first object and the second object are both moderately fatigued, the effective reminding mode of the first object is that the control device simulates another person making a virtual phone call with the user, and the effective reminding mode of the second object is that the control device controls the knocking device to knock the target object.
In the embodiment of the application, the control equipment establishes the corresponding relation between the fatigue grade and the reminding mode according to the effective reminding mode corresponding to the target object, so that the target object is effectively reminded when the target object is in fatigue driving, and the fatigue state of the target object is effectively relieved.
Step 207: based on the fatigue grade corresponding to the fatigue state, the control equipment determines a reminding mode corresponding to the fatigue grade from the first corresponding relation.
If the target object drives the host vehicle for the first time, the control device randomly selects one reminding mode from the reminding modes corresponding to the fatigue level in the first corresponding relation or selects one reminding mode according to a certain sequence based on the fatigue level.
For example, the reminding modes of the first level include sounding an alarm and opening the window for ventilation; the reminding modes of the second level include the control device simulating another person making a virtual phone call with the user, and the control device controlling the knocking device to knock the back of the target object; the reminding modes of the third level include the control device controlling the spraying device to spray liquid in the prescribed direction of the target object, and the control device taking over the steering wheel and controlling the vehicle to run. The spraying device can be arranged on one side of the main driving seat, and the knocking device can be arranged inside the backrest of the main driving seat. When the fatigue level of the target object is the first level, the control device randomly selects one reminding mode from the two reminding modes corresponding to the first level.
If the target object does not drive the host vehicle for the first time, the control device determines an effective reminding mode corresponding to the fatigue level from the first corresponding relation based on the fatigue level.
For example, the effective reminding mode corresponding to the first level is window ventilation, the effective reminding mode corresponding to the second level is the control device simulating another person making a virtual phone call with the user, and the effective reminding mode corresponding to the third level is the control device controlling the spraying device to spray liquid in the prescribed direction of the target object. When the fatigue level of the target object is the second level, the control device selects the effective reminding mode corresponding to the second level.
The foregoing are just a few reminding modes, and in practical application, the target object may be reminded by other reminding modes, for example, playing music or increasing the volume of music, which is not limited in particular.
Step 208: the control device reminds the target object based on the reminding mode.
The control device reminds the target object based on the reminding mode corresponding to the fatigue level.
In the embodiment of the application, if the target object drives the host vehicle for the first time, the control device determines whether the fatigue degree of the target object is relieved within the second preset time period, namely, whether the fatigue level is reduced after reminding the target object based on the corresponding reminding mode. If the fatigue level of the target object decreases, for example, from moderate fatigue to mild fatigue or from mild fatigue to wakefulness, the control device uses the alert mode as an effective alert mode and updates the first correspondence based on the effective alert mode. If the fatigue level of the target object is not reduced, the control device reselects the reminding mode from the reminding modes corresponding to the fatigue level, and reminds the target object based on the reselected reminding mode.
If the target object does not drive the host vehicle for the first time, the control device determines whether the fatigue degree of the target object is relieved within a second preset time period, namely, whether the fatigue level is reduced after reminding the target object based on an effective reminding mode. If the fatigue level of the target object is lowered, no other operation is required. If the fatigue level of the target object is not reduced and the effective reminding modes corresponding to the fatigue level are multiple, the control equipment reselects the effective reminding modes. If the fatigue level of the target object is not reduced and the effective reminding mode corresponding to the fatigue level is one, the control equipment selects a reminding mode from the default reminding modes to remind the target object.
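Steps 206-208 and the effectiveness feedback above can be sketched as follows; the mode names and the shape of the account data are illustrative assumptions:

```python
import random

# Assumed default reminding modes per fatigue level (from the examples above).
DEFAULT_MODES = {
    1: ["alarm", "open_window"],
    2: ["virtual_call", "knock_seat_back"],
    3: ["spray_liquid", "take_over_steering"],
}

def pick_reminder(account_data, level):
    """Return a reminder: a recorded effective mode if one exists, else a default."""
    effective = account_data.setdefault("effective_modes", {})
    if effective.get(level):
        return random.choice(effective[level])
    return random.choice(DEFAULT_MODES[level])

def record_outcome(account_data, level, mode, level_after):
    """Update the first correspondence: if the fatigue level dropped within the
    second preset duration, store the mode as effective for that level."""
    if level_after < level:
        modes = account_data.setdefault("effective_modes", {}).setdefault(level, [])
        if mode not in modes:
            modes.append(mode)
```

After a few drives, `pick_reminder` converges on the modes that have actually relieved this driver's fatigue, which is the per-account personalization the text describes.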
The process of determining whether the fatigue degree of the target object is relieved within the second preset time period by the control device is the same as the process of determining the fatigue state of the target object, and will not be repeated here.
In the embodiment of the application, if the reminding mode is that the control device takes over the steering wheel and controls the vehicle to run, the control device may first output a voice message to remind the target object that the control device is taking over the steering wheel, and then control the vehicle to run.
The control device can control the vehicle to run according to the type of the road. For example, if the road on which the vehicle is located is an expressway, the control device may determine the location of the service area closest to the current location, and control the vehicle to travel toward the service area. For another example, if the road on which the vehicle is located is an urban road, the control device may determine a parking lot or a parking permitted area closest to the current position, and control the vehicle to travel to the parking lot or the parking permitted area.
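The road-type routing rule above can be sketched as follows; the `nearest` lookup table is a placeholder assumption standing in for a real navigation query:

```python
def takeover_destination(road_type, nearest):
    """Pick where to drive the vehicle after take-over.

    nearest: dict mapping destination kind -> location, assumed to come
    from a navigation service (not modeled here).
    """
    if road_type == "expressway":
        # On a highway, head for the closest service area.
        return nearest.get("service_area")
    if road_type == "urban":
        # In the city, prefer a parking lot, else any area where parking is allowed.
        return nearest.get("parking_lot") or nearest.get("parking_allowed_area")
    return None
```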
The embodiment of the application provides a driving state monitoring method, which comprises the steps of extracting face features of a target object from a face image of the target object, sending the face features to a cloud server, determining a target account corresponding to the face features through the cloud server, and obtaining account data of the target account. And the fatigue state of the target object is determined according to the face image, and when the target object is in fatigue driving, a reminding mode corresponding to the fatigue grade is acquired from the account data of the target account, and the target object is reminded by the reminding mode, so that safe driving of a user is ensured, and the occurrence rate of traffic accidents is reduced.
In the embodiment of the application, a user registers an account through face recognition when using the vehicle for the first time; on subsequent boardings the user is logged in automatically and seamlessly, and common setting items are recalled. During daily driving, the age, sex and driving behavior of the user can be automatically identified, judgment is made in combination with scene brain technology, and the user is reminded through broadcast voice, phone, music and the like. These active reminding activities can enhance driving safety and provide driving pleasure.
Next, a process of monitoring the state of the distraction will be described.
In the embodiment of the application, the control device can determine the gazing area of the sight of the target object based on the face image; under the condition that the gazing area is not matched with the preset area, determining that the target object is in a distraction state; acquiring a second corresponding relation between the abnormal state and the reminding mode from account data of the target account; acquiring a first reminding mode corresponding to the distraction state from the second corresponding relation; and reminding the target object based on the first reminding mode.
In this implementation, the control device extracts an eye feature from the face image, determines a line-of-sight direction of the target object according to the eye feature, and determines a gaze area of the line of sight according to the line-of-sight direction. And then determining whether the gazing area is a preset area, if the gazing area is the preset area, determining that the target object is not distracted, and if the gazing area is not the preset area, namely the gazing area is not matched with the preset area, determining that the target object is in a distracted state.
The preset area may be set as required and is, for example, a rearview mirror, a front windshield, a mirror, an instrument panel, etc., which is not particularly limited.
If the control device cannot determine the sight line direction of the target object according to the eye features, the control device can determine the gazing area according to the head pose direction of the target object, and further determine whether the target object is in a distraction state.
And under the condition that the target object is in a distraction state, the control equipment acquires a second corresponding relation from account data of the target account, wherein the second corresponding relation is a corresponding relation between an abnormal state and a reminding mode, and one abnormal state corresponds to one or more reminding modes. The control equipment acquires a first reminding mode corresponding to the distraction state from the second corresponding relation, and reminds the target object based on the first reminding mode.
If the first reminding mode corresponding to the distraction state is one, the control equipment directly reminds the target object based on the first reminding mode. If the first reminding modes corresponding to the distraction state are multiple, the control device can randomly select one reminding mode from the multiple first reminding modes to remind the target object. After reminding the target object through the selected first reminding mode, determining whether the target object is still in a distraction state, if so, replacing the first reminding mode, and if not, marking the first reminding mode, and then reminding the target object by adopting the first reminding mode preferentially. The first reminding mode can be set and changed according to the requirement, which is not particularly limited.
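The gaze-area check above can be sketched as follows; the named regions and their normalized-coordinate rectangles are illustrative assumptions:

```python
# Assumed preset (attentive) regions as (x1, y1, x2, y2) rectangles in
# normalized coordinates; a real system would calibrate these per vehicle.
PRESET_REGIONS = {
    "front_windshield": (0.1, 0.0, 0.9, 0.5),
    "instrument_panel": (0.3, 0.5, 0.7, 0.8),
    "rear_view_mirror": (0.4, 0.0, 0.6, 0.1),
}

def gaze_region(x, y):
    """Return the name of the preset region containing gaze point (x, y), or None."""
    for name, (x1, y1, x2, y2) in PRESET_REGIONS.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return None

def is_distracted(x, y):
    """The driver is distracted when the gaze area matches no preset region."""
    return gaze_region(x, y) is None
```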
The following describes a call state monitoring procedure.
In the embodiment of the application, the control equipment detects the hand state of the target object based on the face image; under the condition that the hand of the target object is detected to hold the electronic equipment, determining the distance between the hand and the ear of the target object; under the condition that the distance is smaller than the preset distance, determining that the target object is in a call state; acquiring a second reminding mode corresponding to the call state from the second corresponding relation; and reminding the target object based on the second reminding mode.
In this implementation manner, the control device detects a hand state of the target object from the face image, determines whether the object is an electronic device if the hand of the target object is detected and the hand holds the object, and determines that the hand of the target object holds the electronic device if the object is the electronic device. In this case, the control device determines the distance between the hand and the ear of the target object, and if the distance is smaller than the preset distance, determines that the target object is in a call state. Or the control device may input the face image into the recognition model, score the call behavior of the target object through the recognition model, and determine that the target object is in a call state when the score exceeds a threshold value.
And under the condition that the target object is in the call state, the control equipment acquires a second reminding mode corresponding to the call state from the second corresponding relation, and reminds the target object based on the second reminding mode.
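The call-state decision above can be sketched as follows; the landmark format and the 0.15 m preset distance are illustrative assumptions:

```python
import math

CALL_DISTANCE_THRESHOLD = 0.15  # metres; assumed preset distance

def hand_ear_distance(hand_xyz, ear_xyz):
    """Euclidean distance between the hand and ear landmarks (3-D points)."""
    return math.dist(hand_xyz, ear_xyz)

def is_on_call(holding_device, hand_xyz, ear_xyz,
               threshold=CALL_DISTANCE_THRESHOLD):
    """True when a held electronic device is within the preset distance of the ear."""
    return holding_device and hand_ear_distance(hand_xyz, ear_xyz) < threshold
```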
Next, a process of monitoring the smoking status will be described.
In the embodiment of the application, the control device detects the hand state of the target object based on the face image; detects the mouth state of the target object in the case that the hand of the target object is detected to hold a smoking object; determines that the target object is in a smoking state in the case that the smoking object is detected to be held in the mouth of the target object; acquires a third reminding mode corresponding to the smoking state from the second correspondence; and reminds the target object based on the third reminding mode.
The control equipment can also acquire the smoke concentration in the vehicle, and determine the environment of the vehicle under the condition that the smoke concentration is greater than the preset concentration; determining a ventilation mode based on the environment of the vehicle; ventilation is performed based on a ventilation mode.
In this implementation, the control device detects the hand state of the target object from the face image; if the hand of the target object is detected and the hand holds an object, it determines whether the object is a smoking object (e.g., a cigarette), and if so, detects the mouth state of the target object; if the smoking object is detected to be held in the mouth of the target object, it determines that the target object is in a smoking state. Alternatively, the control device may input the face image into a recognition model, score the smoking behavior of the target object through the recognition model, and determine that the target object is in a smoking state when the score exceeds a threshold.
And under the condition that the target object is in the smoking state, the control equipment acquires a third reminding mode corresponding to the smoking state from the second corresponding relation, and reminds the target object based on the third reminding mode.
The method includes that the process of reminding the target object by the control device based on the second reminding mode or the third reminding mode is the same as the process of reminding the target object based on the first reminding mode, and the description is omitted here.
In addition, in the case that the target object is in a smoking state, the control device can also detect the smoke concentration through a sensor in the vehicle, and determine the environment in which the vehicle is located in the case that the smoke concentration is greater than a preset concentration. Different environments correspond to different ventilation modes, and ventilation is performed based on the corresponding ventilation mode, so that the air quality in the vehicle is improved. For example, if the vehicle is in a rainy or snowy environment, the ventilation mode may be turning on the air conditioner; if the vehicle is in a sunny environment, the ventilation mode may be window ventilation.
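The smoke-concentration response can be sketched as follows; the threshold value and the environment-to-ventilation table are illustrative assumptions:

```python
SMOKE_THRESHOLD = 50.0  # assumed preset concentration (arbitrary sensor units)

# Assumed environment -> ventilation-mode table from the examples above.
VENTILATION_BY_ENVIRONMENT = {
    "rain": "air_conditioner",
    "snow": "air_conditioner",
    "sunny": "open_window",
}

def choose_ventilation(smoke_concentration, environment):
    """Return the ventilation mode to trigger, or None if below the threshold.

    Unknown environments conservatively fall back to the air conditioner,
    which keeps the cabin closed.
    """
    if smoke_concentration <= SMOKE_THRESHOLD:
        return None
    return VENTILATION_BY_ENVIRONMENT.get(environment, "air_conditioner")
```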
In the embodiment of the application, the control device can also broadcast or recommend the target object according to the emotion type of the target object. The process may be: the control equipment determines the emotion type of the target object based on the face characteristics; under the condition that the emotion type of the target object is positive emotion, acquiring identity information of the target object from account data of a target account; based on the positive emotion and the identity information, broadcasting a voice message or recommending music for the target object.
In this implementation, the control device determines the mouth features of the target object based on the face features, and determines that the target object is in a happy state, that is, the emotion type of the target object is a positive emotion, if the mouth corners are raised and the mouth is slightly open or the teeth are exposed.
Under the condition that the emotion type of the target object is positive emotion, the control equipment acquires identity information of the target object, wherein the identity information comprises gender, age and the like, and the control equipment recommends music matched with the positive emotion and the identity information for the target object or broadcasts voice messages matched with the positive emotion and the identity information based on the positive emotion and the identity information.
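The mouth-feature emotion check and the follow-up recommendation can be sketched as follows; the feature flags, the age cut-off, and the recommendation table are illustrative assumptions:

```python
def is_positive_emotion(mouth_corners_raised, mouth_slightly_open, teeth_exposed):
    """True when the mouth features indicate a happy (positive) state."""
    return mouth_corners_raised and (mouth_slightly_open or teeth_exposed)

def pick_recommendation(emotion_positive, identity):
    """Return a (voice_message, music_style) suggestion, or None.

    identity: dict with fields such as 'sex' and 'age' from the account data.
    The style table is purely illustrative.
    """
    if not emotion_positive:
        return None
    style = "pop" if identity.get("age", 0) < 30 else "light_music"
    return ("Glad you're in a good mood!", style)
```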
In the embodiment of the application, the control device can also automatically adjust the positions of the seat and the rearview mirror. The process may be: the control equipment acquires the position information of the main driving seat and the position information of the rearview mirror from account data of a target account; the position of the main drive seat and the position of the rear view mirror are adjusted based on the position information of the main drive seat and the position information of the rear view mirror.
In this implementation manner, if the target object does not drive the host vehicle for the first time, the account data of the target account stores the position information of the corresponding main driving seat and the position information of the rearview mirror when the target object drives the host vehicle for the history. After the target object gets on the vehicle, the control device automatically adjusts the position of the main drive seat and the position of the rear view mirror based on the historically stored position information of the main drive seat and the position information of the rear view mirror.
If the target object drives the vehicle for the first time, the target object can manually adjust the position of the main driving seat and the position of the rearview mirror, and the control device stores these positions, so that they can be automatically adjusted the next time the target object drives the vehicle, thereby meeting the requirements of the target object.
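The save/restore of personalized positions can be sketched as follows; the field names in the account data are illustrative assumptions:

```python
# Settings the text mentions: seat, rearview mirror, steering-wheel height.
POSITION_KEYS = ("seat_position", "mirror_position", "steering_wheel_height")

def apply_saved_positions(account_data, vehicle):
    """Restore any stored positions from the account data into the vehicle."""
    body = account_data.get("vehicle_body_data", {})
    for key in POSITION_KEYS:
        if key in body:
            vehicle[key] = body[key]

def save_current_positions(account_data, vehicle):
    """Store the driver's manual adjustments for the next drive (first-drive case)."""
    body = account_data.setdefault("vehicle_body_data", {})
    for key in POSITION_KEYS:
        if key in vehicle:
            body[key] = vehicle[key]
```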
In the embodiment of the application, the control device can also acquire the height of the steering wheel from the account data of the target account, and automatically adjust the height of the steering wheel after the target object gets on the vehicle.
In the embodiment of the application, the control device can also heat the main driving seat. The process may be: when the vehicle is in a power-on state, the control device acquires the temperature outside the vehicle. If the temperature is lower than a preset temperature, the control device starts the heating function of the main driving seat, or displays an interactive interface on which a heating start button is shown, and starts the heating function of the main driving seat in response to detecting a trigger operation on the heating start button.
After the control device starts the heating function of the main driving seat, a voice message can be output and/or a message bullet frame can be displayed on the interactive interface, so that a user knows that the heating function is started.
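The power-on heating decision can be sketched as follows; the 5 °C threshold and the auto/confirm policy flag are assumptions:

```python
HEATING_TEMPERATURE_THRESHOLD = 5.0  # degrees Celsius; assumed preset temperature

def seat_heating_action(outside_temp_c, auto_enable=True):
    """Decide what to do at power-on.

    Returns 'heat_on' (start heating directly), 'show_button' (display the
    interactive confirmation), or 'none' (temperature above the threshold).
    """
    if outside_temp_c >= HEATING_TEMPERATURE_THRESHOLD:
        return "none"
    return "heat_on" if auto_enable else "show_button"
```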
The main driving seat can also support more posture adjustments: in addition to the conventional horizontal, height and backrest adjustments, it supports adjustment of the leg rest, the shoulder and other directions to achieve a comfortable sitting posture, and also supports heating, ventilation, massage, memory and other functions.
In the embodiment of the application, the driving state of the target object can be monitored by adopting algorithms such as fatigue sensing, gazing sensing, distraction sensing, behavior sensing and the like, and the target object is timely reminded when the target object is in an abnormal driving state, so that the safe driving of the target object is ensured.
In addition, a new generation of vehicle-mounted artificial intelligence services such as prediction, active recommendation, voice dialogue and behavior sensing can be achieved by using core technologies such as image sensing, voice signal processing, voice recognition and scene brain. If the target object is in a happy state, the control device can also actively care for the target object, and when the target object is in a smoking state, actively ventilate to improve the air quality in the vehicle. In addition, the common setting items of the target object can be automatically adjusted according to the driving habits of the target object, thereby improving the user experience.
Fig. 5 is a schematic structural diagram of a driving state monitoring device according to an embodiment of the present application, referring to fig. 5, the device includes:
A first obtaining module 501, configured to obtain a face image of a target object for driving a vehicle;
the extraction module 502 is configured to extract a face feature of a target object based on the face image, and send the face feature to the cloud server; the cloud server is used for determining a target account of a target object based on the face characteristics and acquiring account data of the target account;
a receiving module 503, configured to receive account data of a target account sent by a cloud server;
a first determining module 504, configured to determine an eye-closing frequency and an eye-closing duration of the target object within a first preset duration based on the face image;
a second determining module 505, configured to determine a fatigue state of the target object based on the eye closing frequency and the eye closing duration;
the second obtaining module 506 is configured to obtain, when the target object has fatigue driving, a first correspondence between the fatigue level and the reminding mode from account data of the target account;
the third determining module 507 is configured to determine, based on the fatigue level corresponding to the fatigue state, a reminding manner corresponding to the fatigue level from the first correspondence;
the reminding module 508 is used for reminding the target object based on the reminding mode.
In a possible implementation manner, the second determining module 505 is configured to determine that the target object is subjected to fatigue driving and the fatigue level is the first level when the eye closing frequency reaches the first frequency and the eye closing time length is greater than the first time length and is less than the second time length; when the eye closing frequency reaches a second frequency and the eye closing time length is longer than the second time length and is shorter than the third time length, determining that fatigue driving occurs on the target object and the fatigue grade is a second grade; when the eye closing frequency reaches a third frequency and the eye closing time is longer than a third time, determining that the target object is in fatigue driving and the fatigue grade is a third grade; wherein the third frequency is less than the second frequency, which is less than the first frequency.
In another possible implementation, the apparatus further includes:
a fourth determining module, configured to determine a gaze area of a sight line of the target object based on the face image;
a fifth determining module, configured to determine that the target object is in a distracted state when the gaze area does not match the preset area;
the third acquisition module is used for acquiring a second corresponding relation between the abnormal state and the reminding mode from account data of the target account;
the fourth acquisition module is used for acquiring a first reminding mode corresponding to the distraction state from the second corresponding relation;
the reminding module 508 is further configured to remind the target object based on the first reminding mode.
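The gaze-area check described by the fourth and fifth determining modules can be sketched as a point-in-region test. The region bounds and normalized-coordinate convention are assumptions for illustration; the patent only states that a mismatch between the gaze area and a preset area marks the target object as distracted.

```python
def is_distracted(gaze_point, preset_region=((-0.3, 0.3), (-0.2, 0.4))):
    """gaze_point: (x, y) gaze estimate from the face image (normalized coords, assumed).
    preset_region: ((x_min, x_max), (y_min, y_max)) covering the road ahead (assumed)."""
    (x_min, x_max), (y_min, y_max) = preset_region
    x, y = gaze_point
    # Distracted when the gaze falls outside the preset forward-road region.
    return not (x_min <= x <= x_max and y_min <= y <= y_max)
```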
In another possible implementation, the apparatus further includes:
the first detection module is used for detecting the hand state of the target object based on the face image;
a sixth determining module, configured to determine a distance between the hand and the ear of the target object when it is detected that the hand of the target object is holding an electronic device;
a seventh determining module, configured to determine that the target object is in a call state when the distance is smaller than a preset distance;
a fifth obtaining module, configured to obtain a second alert mode corresponding to the call state from the second correspondence;
The reminding module 508 is further configured to remind the target object based on the second reminding mode.
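The call-state heuristic of the sixth and seventh determining modules reduces to a distance comparison. A minimal sketch, assuming metric 3D keypoint positions and a 0.15 m preset distance (neither is specified by the patent):

```python
import math

def is_on_call(holding_device, hand_pos, ear_pos, preset_distance=0.15):
    """holding_device: result of the hand-state detection (bool).
    hand_pos, ear_pos: (x, y, z) keypoints in meters (assumed representation)."""
    if not holding_device:
        return False
    # Call state: hand holding a device within the preset distance of the ear.
    return math.dist(hand_pos, ear_pos) < preset_distance
```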
In another possible implementation, the apparatus further includes:
the second detection module is used for detecting the mouth state of the target object under the condition that the hand of the target object is detected to be holding a smoking object;
an eighth determining module, configured to determine that the target object is in a smoking state when it is detected that the smoking object is contained in the mouth of the target object;
the eighth determining module is further configured to obtain the smoke concentration in the vehicle and determine the environment in which the vehicle is located under the condition that the smoke concentration is greater than the preset concentration;
a ninth determining module, configured to determine a ventilation mode based on an environment in which the vehicle is located;
and the ventilation module is used for ventilating based on a ventilation mode.
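The smoke-handling flow of the eighth and ninth determining modules and the ventilation module can be sketched as below. The concentration threshold, environment labels, and ventilation mode names are all illustrative assumptions; the patent only states that the mode is chosen based on the vehicle's environment when the concentration exceeds a preset value.

```python
def choose_ventilation(smoke_concentration, environment, preset_concentration=50.0):
    """environment: label such as 'open_road', 'tunnel', 'rain' (assumed labels).
    Returns a ventilation mode name, or None when no action is needed."""
    if smoke_concentration <= preset_concentration:
        return None  # concentration acceptable, no ventilation triggered
    if environment in ("tunnel", "rain", "heavy_traffic"):
        # Outside air undesirable: keep windows closed, circulate internally.
        return "internal_circulation_fan"
    # Clean outside air available: ventilate directly.
    return "open_windows"
```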
In another possible implementation, the apparatus further includes:
a tenth determining module, configured to determine an emotion type of the target object based on the face features;
the sixth acquisition module is used for acquiring the identity information of the target object from the account data of the target account under the condition that the emotion type of the target object is positive emotion;
and the recommending module is used for broadcasting voice messages or recommending music for the target object based on the positive emotion and the identity information.
In another possible implementation, the apparatus further includes:
a seventh obtaining module, configured to obtain, from account data of the target account, position information of the main driving seat and position information of the rearview mirror;
and the adjusting module is used for adjusting the position of the main driving seat and the position of the rearview mirror based on the position information of the main driving seat and the position information of the rearview mirror.
The embodiment of the application provides a driving state monitoring device, which extracts the face features of a target object from a face image of the target object, sends the face features to a cloud server, determines a target account corresponding to the face features through the cloud server, and acquires account data of the target account. The device further determines the fatigue state of the target object according to the face image; when fatigue driving of the target object occurs, a reminding mode corresponding to the fatigue grade is acquired from the account data of the target account, and the target object is reminded in that reminding mode, thereby ensuring safe driving of the user and reducing the occurrence rate of traffic accidents.
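The end-to-end flow summarized above can be sketched as one orchestration step. The cloud lookup and vision components are passed in as callables so the sketch stays self-contained; every name here is a placeholder for the corresponding module, not an API defined by the patent.

```python
def monitor_step(face_image, cloud, extract_features, measure_eye_closing,
                 classify, remind):
    features = extract_features(face_image)             # extraction module
    account = cloud["find_account"](features)           # cloud server: resolve target account
    account_data = cloud["get_account_data"](account)   # receiving module: account data
    freq, duration = measure_eye_closing(face_image)    # first determining module
    level = classify(freq, duration)                    # second determining module
    if level > 0:
        # First correspondence: fatigue level -> per-account reminding mode.
        remind(account_data["fatigue_reminders"][level])
    return level
```

A usage example with stubbed components:

```python
calls = []
cloud = {
    "find_account": lambda feat: "acct-1",
    "get_account_data": lambda acct: {
        "fatigue_reminders": {1: "chime", 2: "voice", 3: "vibrate"},
    },
}
monitor_step("frame-0", cloud,
             extract_features=lambda img: "feat",
             measure_eye_closing=lambda img: (8.0, 2.5),
             classify=lambda f, d: 2,
             remind=calls.append)
# The grade-2 reminder from the account data is issued.
```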
As shown in fig. 6, the control device 600 may include a processor (central processing unit, CPU) 601 and a memory 602, where the memory 602 stores at least one program code, and the at least one program code is loaded and executed by the processor 601 to implement the driving state monitoring method in the above embodiment. Of course, the control device 600 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, there is also provided a computer readable storage medium storing at least one program code loaded and executed by a processor to implement the driving state monitoring method in the above embodiment.
In an exemplary embodiment, there is also provided a computer program product storing at least one program code loaded and executed by a processor to implement the driving state monitoring method in the above embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the above storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description is provided only to help those skilled in the art understand the technical solution of the present application, and is not intended to limit the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A driving state monitoring method, characterized in that the method comprises:
acquiring a face image of a target object driving a vehicle;
based on the face image, extracting face features of the target object, and sending the face features to a cloud server; the cloud server is used for determining a target account of the target object based on the face characteristics and acquiring account data of the target account;
receiving account data of the target account sent by the cloud server;
based on the face image, determining an eye closing frequency and an eye closing duration of the target object within a first preset time period;
determining a fatigue state of the target object based on the eye closing frequency and the eye closing duration;
under the condition that fatigue driving occurs to the target object, acquiring a first corresponding relation between a fatigue grade and a reminding mode from account data of the target account;
determining a reminding mode corresponding to the fatigue level from the first corresponding relation based on the fatigue level corresponding to the fatigue state;
and prompting the target object based on the prompting mode.
2. The method of claim 1, wherein determining the fatigue state of the target object based on the eye closing frequency and the eye closing duration comprises:
when the eye closing frequency reaches a first frequency and the eye closing duration is greater than a first duration and less than a second duration, determining that fatigue driving occurs on the target object and the fatigue grade is a first grade;
when the eye closing frequency reaches a second frequency and the eye closing duration is greater than the second duration and less than a third duration, determining that fatigue driving occurs on the target object and the fatigue grade is a second grade;
when the eye closing frequency reaches a third frequency and the eye closing duration is greater than the third duration, determining that fatigue driving occurs on the target object and the fatigue grade is a third grade; wherein the third frequency is less than the second frequency, and the second frequency is less than the first frequency.
3. The method according to claim 1, wherein the method further comprises:
determining a fixation area of the sight of the target object based on the face image;
under the condition that the gazing area is not matched with a preset area, determining that the target object is in a distraction state;
acquiring a second corresponding relation between the abnormal state and the reminding mode from account data of the target account;
acquiring a first reminding mode corresponding to the distraction state from the second corresponding relation;
And reminding the target object based on the first reminding mode.
4. A method according to claim 3, characterized in that the method further comprises:
detecting a hand state of the target object based on the face image;
determining a distance between the hand and the ear of the target object under the condition that it is detected that the hand of the target object is holding an electronic device;
under the condition that the distance is smaller than a preset distance, determining that the target object is in a call state;
acquiring a second reminding mode corresponding to the call state from the second corresponding relation;
and reminding the target object based on the second reminding mode.
5. The method according to claim 4, wherein the method further comprises:
detecting a mouth state of the target object under the condition that the hand of the target object is detected to be holding a smoking object;
determining that the target object is in a smoking state if it is detected that the smoking object is contained in the mouth of the target object;
acquiring the smoke concentration in the vehicle, and determining the environment of the vehicle under the condition that the smoke concentration is larger than a preset concentration;
Determining a ventilation mode based on the environment of the vehicle;
and ventilation is performed based on the ventilation mode.
6. The method according to claim 1, wherein the method further comprises:
determining the emotion type of the target object based on the face features;
acquiring identity information of the target object from account data of the target account under the condition that the emotion type of the target object is positive emotion;
and broadcasting a voice message or recommending music for the target object based on the positive emotion and the identity information.
7. The method according to claim 1, wherein the method further comprises:
acquiring position information of a main driving seat and position information of a rearview mirror from account data of the target account;
based on the position information of the main driving seat and the position information of the rearview mirror, the position of the main driving seat and the position of the rearview mirror are adjusted.
8. A driving state monitoring device, characterized in that the device comprises:
the first acquisition module is used for acquiring a face image of a target object driving a vehicle;
the extraction module is used for extracting the face characteristics of the target object based on the face image and sending the face characteristics to a cloud server; the cloud server is used for determining a target account of the target object based on the face characteristics and acquiring account data of the target account;
The receiving module is used for receiving the account data of the target account sent by the cloud server;
the first determining module is used for determining the eye closing frequency and the eye closing duration of the target object within a first preset time period based on the face image;
a second determining module for determining a fatigue state of the target object based on the eye closing frequency and the eye closing duration;
the second acquisition module is used for acquiring a first corresponding relation between the fatigue grade and the reminding mode from the account data of the target account under the condition that the target object is in fatigue driving;
the third determining module is used for determining a reminding mode corresponding to the fatigue level from the first corresponding relation based on the fatigue level corresponding to the fatigue state;
and the reminding module is used for reminding the target object based on the reminding mode.
9. A control apparatus, characterized in that it comprises a processor and a memory in which at least one program code is stored, which is loaded and executed by the processor to implement the driving state monitoring method according to any one of claims 1 to 7.
10. A computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement the driving condition monitoring method according to any one of claims 1 to 7.
CN202310789333.6A 2023-06-29 2023-06-29 Driving state monitoring method, device, equipment and storage medium Pending CN116985819A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310789333.6A CN116985819A (en) 2023-06-29 2023-06-29 Driving state monitoring method, device, equipment and storage medium
PCT/CN2024/100465 WO2025001968A1 (en) 2023-06-29 2024-06-20 Driving state monitoring method and apparatus, device, storage medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310789333.6A CN116985819A (en) 2023-06-29 2023-06-29 Driving state monitoring method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116985819A 2023-11-03

Family

ID=88531049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310789333.6A Pending CN116985819A (en) 2023-06-29 2023-06-29 Driving state monitoring method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN116985819A (en)
WO (1) WO2025001968A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025001968A1 (en) * 2023-06-29 2025-01-02 奇瑞汽车股份有限公司 Driving state monitoring method and apparatus, device, storage medium and product
WO2026001374A1 (en) * 2024-06-28 2026-01-02 比亚迪股份有限公司 Vehicle warning method, computer-readable storage medium, electronic device, and vehicle

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8930269B2 (en) * 2012-12-17 2015-01-06 State Farm Mutual Automobile Insurance Company System and method to adjust insurance rate based on real-time data about potential vehicle operator impairment
CN111950398A (en) * 2020-07-27 2020-11-17 上海仙豆智能机器人有限公司 A kind of fatigue driving processing method, device and computer storage medium
CN114987500A (en) * 2022-05-31 2022-09-02 深圳市航盛电子股份有限公司 Driver state monitoring method, terminal device and storage medium
CN115366907B (en) * 2022-08-12 2024-10-22 重庆长安汽车股份有限公司 Method and device for reminding abnormal state of driver, vehicle and storage medium
CN116985819A (en) * 2023-06-29 2023-11-03 奇瑞汽车股份有限公司 Driving state monitoring method, device, equipment and storage medium


Also Published As

Publication number Publication date
WO2025001968A1 (en) 2025-01-02

Similar Documents

Publication Publication Date Title
TWI741512B (en) Method, device and electronic equipment for monitoring driver's attention
EP3067827B1 (en) Driver distraction detection system
US20240362931A1 (en) Systems and methods for determining driver control over a vehicle
JP7299840B2 (en) Information processing device and information processing method
US20220203996A1 (en) Systems and methods to limit operating a mobile phone while driving
CN105894733B (en) Driver's monitoring system
KR20200030049A (en) Vehicle control device and vehicle control method
CN116985819A (en) Driving state monitoring method, device, equipment and storage medium
CN112568904B (en) Vehicle interaction method and device, computer equipment and storage medium
CN114834457B (en) Method, device, equipment and storage medium for detecting driver state
CN114821966A (en) Fatigue driving early warning method, device, terminal and fatigue driving early warning system
JP2019131096A (en) Vehicle control supporting system and vehicle control supporting device
KR102641717B1 (en) Dementia patient integrated management and dementia judgment system
CN112437246B (en) Video conference method based on intelligent cabin and intelligent cabin
KR20190012504A (en) Terminal
CN116767255B (en) Intelligent cabin linkage method and system for new energy automobile
CN110585696A (en) Method, system and control platform for displaying vehicle-mounted virtual reality content
JP4586443B2 (en) Information provision device
CN114360241A (en) A vehicle interaction method, vehicle interaction device and storage medium
KR20220005290A (en) In-Cabin Security Sensor and Platform Service Method therefor
CN115139900B (en) Information reminding method, electronic equipment and storage medium
CN120687061B (en) Interaction method, interaction device and vehicle
CN119898192B (en) Head-up display control method, device, electronic device and storage medium
US20250229797A1 (en) Vehicular driver monitoring system with driver interaction
CN121425262A (en) Vehicle risk prompting method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination