
WO2022111784A1 - Device for and method of driving supervision - Google Patents

Device for and method of driving supervision

Info

Publication number
WO2022111784A1
WO2022111784A1 (application number PCT/EP2020/025534)
Authority
WO
WIPO (PCT)
Prior art keywords
driving
vehicle
sensor data
reward
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2020/025534
Other languages
French (fr)
Inventor
Rares BARBANTAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dr Ing HCF Porsche AG
Original Assignee
Dr Ing HCF Porsche AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dr Ing HCF Porsche AG filed Critical Dr Ing HCF Porsche AG
Priority to DE112020007528.1T priority Critical patent/DE112020007528T5/en
Priority to PCT/EP2020/025534 priority patent/WO2022111784A1/en
Publication of WO2022111784A1 publication Critical patent/WO2022111784A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G08G 1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B60W 30/12: Lane keeping
    • B60W 30/181: Preparing for stopping
    • B60W 30/18154: Approaching an intersection
    • B60W 40/09: Driving style or behaviour
    • B60W 50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W 60/00184: Planning or execution of driving tasks specially adapted for safety by employing degraded modes, e.g. reducing speed, in response to suboptimal conditions related to infrastructure
    • G08G 1/0112: Measuring and analyzing of parameters relative to traffic conditions based on data from the vehicle, e.g. floating car data [FCD]
    • G08G 1/0133: Traffic data processing for classifying traffic situation
    • B60W 2552/53: Input parameters relating to infrastructure: road markings, e.g. lane marker or crosswalk
    • B60W 2555/60: Input parameters relating to exterior conditions: traffic rules, e.g. speed limits or right of way

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

Method of and device (100) for driving supervision, wherein the device (100) comprises means (108) configured to receive sensor data, means (110) configured to estimate a driving trajectory of a vehicle and means (114) configured to analyze the sensor data, wherein the means (108) configured to receive sensor data, the means (110) configured to estimate the driving trajectory of the vehicle, and the means (114) configured to analyze the sensor data and the driving trajectory are configured to cooperate for receiving sensor data from at least one sensor of a vehicle, determining a driving trajectory of the vehicle depending on the sensor data, providing a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining a reward depending on a difference between the driving trajectory and the reference, and outputting the reward.

Description

Device for and method of driving supervision
The invention concerns a device for and method of driving supervision.
US 20180060970 A1 discloses a driver assistance system for collision mitigation that analyzes driving behavior with regard to critical traffic situations, of which the driver may be warned. US 20170166217 A1 and WO 2014029882 A1 disclose adapting such warnings to different vehicle types, driving modes and driving styles.
The method and device according to the independent claims further improve the driving supervision.
The method of driving supervision comprises receiving sensor data from at least one sensor of a vehicle, determining a driving trajectory of the vehicle depending on the sensor data, providing a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining a reward depending on a difference between the driving trajectory and the reference, and outputting the reward.
Advantageously, the method comprises determining at least one predicted trajectory of a traffic participant from the sensor data, wherein the reference is determined depending on the at least one predicted trajectory.
In one aspect, the reward is determined depending on an artificial intelligence model, wherein the artificial intelligence model is trained on different driving styles from recorded driving of, in particular, professional drivers.
The artificial intelligence model may be trained to classify the driving style of the driver, in particular as sporty, aggressive, safe, ecological, the method further comprising determining the reference for driving depending on the driving style. Advantageously, the method comprises outputting the reward on a graphical user interface of the vehicle or to a social media interface.
Advantageously, the method comprises determining for different driving situations a plurality of differences between different driving trajectories and respective references, and determining the reward depending on the plurality of differences.
Advantageously, the method comprises determining a goal depending on the vehicle type, the driving mode for the vehicle or the driving style, determining if the difference meets the goal, providing the reward if the goal is met and not providing the reward otherwise.
The method may comprise determining the driving mode selected by a driver of the vehicle via a user interface.
The device for driving supervision comprises means configured to receive sensor data, means configured to estimate a driving trajectory of a vehicle and means configured to analyze the sensor data, wherein the means configured to receive sensor data, the means configured to estimate the driving trajectory of the vehicle, and the means configured to analyze the sensor data and the driving trajectory are configured to cooperate for performing steps of the method.
Further advantageous embodiments are derivable from the following description and the drawing. In the drawing:
Fig. 1 schematically depicts a device for driving supervision,
Fig. 2 depicts steps in a method for driving supervision.
Figure 1 depicts a device 100 for driving supervision. The device 100 in the example is connectable to or comprises at least one first sensor 102, at least one second sensor 104 and at least one third sensor 106. The at least one first sensor 102 in the example is a camera. The at least one second sensor 104 in the example is a radar sensor. The at least one third sensor 106 in the example is a LIDAR sensor. Other sensors may be used. There may be more or fewer than three sensors.
The device 100 comprises a first module 108 configured to receive sensor data, a second module 110 configured to estimate a driving path of a vehicle, a third module 112 configured to supervise driving and a fourth module 114 configured to analyze the sensor data and the driving path.
The device 100 may be mountable to the vehicle. The device 100 may be a controller for the vehicle. The means of the device 100 may be distributed throughout various controllers mounted to the vehicle and configured to communicate among one another.
These modules are configured to cooperate according to the method described below for receiving sensor data from at least one sensor of a vehicle, determining a driving trajectory of the vehicle depending on the sensor data, providing a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining a reward depending on a difference between the driving trajectory and the reference, and outputting the reward.
The first module 108 is configured to receive input from the at least one first sensor 102, e.g. the camera. The first module 108 is further configured to receive input from the at least one second sensor 104, e.g. the radar sensor.
The first module 108 is further configured to receive input from the at least one third sensor 106, e.g. the LIDAR sensor.
The first module 108 may be configured to determine fused sensor data from the sensor data of the different sensors. The first module 108 may be configured to determine an object list comprising one object or several objects detected in the sensor data received by the sensors. The first module 108 may be configured to determine separate object lists for different sensors. The first module 108 may be configured to provide a first input for the second module 110 and to provide a second input for the third module 112.
The first input may be fused sensor data. The second input may comprise the object list or the separate object lists and may include further characteristics of the fused sensor data.
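For illustration only, and not part of the application, a minimal Python sketch of what the object list and the two module inputs might look like; the class DetectedObject, the function build_module_inputs and all field names are assumptions:
```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class DetectedObject:
    object_id: int
    position: Tuple[float, float]   # x, y in a vehicle-centered frame [m]
    velocity: Tuple[float, float]   # vx, vy [m/s]
    source: str                     # "camera", "radar" or "lidar"


def build_module_inputs(per_sensor: Dict[str, List[DetectedObject]]):
    """Return the two inputs of the first module (illustrative shapes).

    first_input: one fused list of all detections (for the path estimator),
    second_input: the separate per-sensor object lists (for supervision).
    """
    fused: List[DetectedObject] = [obj for objs in per_sensor.values() for obj in objs]
    return fused, per_sensor
```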
The second module 110 is configured to receive further sensor data, e.g. steering wheel angle, acceleration or other vehicle parameters.
The second module 110 is configured to estimate a future driving path based on the further sensor data, e.g. the steering wheel angle, the acceleration or the other vehicle parameters.
The second module 110 may comprise a model for estimating the future driving path. The model may be an artificial intelligence based model. This artificial intelligence based model may be trained to predict estimates of the future driving path based on the further sensor data.
The first input from the first module 108 may additionally be provided as an input, in particular to this artificial intelligence based model, for estimating the future driving path.
The second module 110 may be configured to use map data to determine road data and to estimate the future driving path depending on the road data.
The second module 110, in particular the model, may be parameterized based on a vehicle type and/or a selected driving mode. The driving mode may be a normal mode, a sport mode or an ecological mode.
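As an illustration of how such a path estimate could be parameterized by vehicle type and driving mode, the following sketch rolls out a standard kinematic bicycle model from the steering wheel derived steering angle and the acceleration. The wheelbase value, the mode gains and the horizon are assumptions, not values from the application:
```python
import math


def estimate_future_path(x, y, heading, speed, steering_angle, acceleration,
                         wheelbase=2.9, horizon_s=3.0, dt=0.1, mode="normal"):
    """Kinematic bicycle rollout sketch of the future driving path.

    wheelbase stands in for the vehicle type; the driving mode scales how
    aggressively the current acceleration is assumed to be held.
    """
    mode_gain = {"normal": 1.0, "sport": 1.2, "ecological": 0.8}.get(mode, 1.0)
    path = []
    for _ in range(int(horizon_s / dt)):
        speed = max(0.0, speed + mode_gain * acceleration * dt)
        heading += (speed / wheelbase) * math.tan(steering_angle) * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        path.append((x, y))
    return path
```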
The future driving path determined by the second module 110 is provided to the third module 112. The third module 112 is configured to determine a predicted path for at least one traffic participant. The third module 112 is configured to determine the predicted path based on the second input, i.e. the object list or the object lists from the sensors. In the example, a plurality of predicted paths is determined for the traffic participants, i.e. the object or objects detected in the sensor data.
The third module 112 is configured in the example to determine for any detected object a predicted trajectory with corresponding uncertainty.
The third module 112 may be configured to determine a future trajectory of the vehicle based on the future driving path of the vehicle.
The third module 112 is configured to evaluate the predicted trajectory of at least one object and the future trajectory of the vehicle to determine a parameter indicating a safety of the driving. The evaluation may consider a potential future collision when the future trajectory of the vehicle and the predicted trajectory of at least one object cross one another. The evaluation may determine the parameter to indicate a high risk of a collision in that case. The evaluation may determine the parameter to indicate a risk of collision in case the distance between these trajectories is less than a threshold, even without the trajectories crossing one another. A high acceleration or high speed of the at least one object or the vehicle may increase the risk compared to a lower acceleration or speed. The parameter may be determined to indicate a higher level of the risk of the collision at a higher acceleration or speed than when the acceleration or speed is lower. A stability of the vehicle may be determined, and the level of the risk may be adjusted to a higher level when the vehicle is in an unstable driving situation than when it is in a stable driving situation.
The parameter may define the risk level based on the vehicle type and/or the selected driving mode as well. The sport mode may lead to the parameter indicating a higher risk level or a lower risk level than the normal mode or the ecological mode. In the example, the parameter may define three levels of the risk, namely from high to low: Imminent, Critical and Standard.
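A minimal sketch of how the three risk levels could be derived from two predicted trajectories follows; the function name risk_level, the distance and speed thresholds, the crossing heuristic and the level-bumping rule are illustrative assumptions:
```python
import math


def risk_level(ego_trajectory, object_trajectory, ego_speed,
               distance_threshold=2.0, speed_threshold=15.0, unstable=False):
    """Map two predicted trajectories to Imminent, Critical or Standard."""
    min_distance = min(
        math.dist(p_ego, p_obj)
        for p_ego, p_obj in zip(ego_trajectory, object_trajectory)
    )
    if min_distance < 0.5:                   # trajectories effectively cross
        level = "Imminent"
    elif min_distance < distance_threshold:  # close approach without crossing
        level = "Critical"
    else:
        level = "Standard"
    # higher speed or an unstable driving situation raises the level by one
    if level != "Imminent" and (ego_speed > speed_threshold or unstable):
        level = "Imminent" if level == "Critical" else "Critical"
    return level
```
In practice the thresholds would depend on the vehicle type and the selected driving mode, as described above.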
The third module 112 may be configured to signal the criticality of the future estimated behavior to a driver e.g. on the three levels: Imminent, Critical and Standard.
The third module 112 may be configured to output data to the fourth module 114. The data may be the predicted trajectory of at least one object and the future trajectory of the vehicle and/or the parameter indicating a safety of the driving.
The fourth module 114 is configured to analyze the predicted trajectory of at least one object and the future trajectory of the vehicle. The fourth module 114 in the example is configured to determine and display a safety analysis for the scenario the vehicle is driving in. The fourth module 114 may be adapted to show a history of driving for individual drivers of the vehicle. The history may be a scenario based safety analysis or show an evolution of the safety analysis for the individual driver. The fourth module 114 may be configured to determine the safety analysis by means of an artificial intelligence. The artificial intelligence may be trained to predict goal achievements. The fourth module 114 may also be configured to provide a data analysis of possible vehicle malfunctions.
The fourth module 114 may be configured to analyze the predicted trajectory of at least one object and the future trajectory of the vehicle and the at least one parameter indicating a safety of the driving.
The fourth module 114 is configured in one aspect to create a safe driving rating e.g. for an individual driver. The fourth module 114 may be configured to upload the rating to a cloud based tool for learning and estimating unsafe behaviors. The fourth module 114 is in another aspect configured to use an external application or external applications for simulating different behaviors. The fourth module 114 is in another aspect configured to use an external application or external applications for presenting safe and unsafe situations.
The fourth module 114 may be configured to implement a gamification concept for rewards. For example, a reward is given for driving the vehicle for 100 km without warnings, or for a perfect overtaking maneuver. The reward may include a badge that is displayed to the driver via a graphical user interface of the vehicle.
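A sketch of such gamification rules is given below; the badge names, the 100 km and overtaking-score thresholds and the scoring scale are assumptions used only to illustrate the concept:
```python
def gamification_badges(km_without_warning: float, overtaking_score: float):
    """Return badges to display on the graphical user interface."""
    badges = []
    if km_without_warning >= 100.0:
        badges.append("100 km without warnings")
    if overtaking_score >= 0.95:   # assumed score in [0, 1] from the analysis
        badges.append("Perfect overtaking")
    return badges
```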
The artificial intelligence may be used for the prediction and/or the analysis of the data as follows.
For prediction:
An artificial intelligence model may be trained on generic driver behavior to accurately predict how other participants in traffic will behave.
A short history of a vehicle's movement together with current dynamic attributes such as speed, acceleration, yaw and yaw rate may be input to the artificial intelligence model to predict the vehicle's future trajectory. A sequential model may be used, e.g. a long short-term memory (LSTM).
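A minimal sketch of such a sequential model follows, written with PyTorch as an assumed framework (the application does not name one); the feature layout, hidden size and prediction horizon are likewise assumptions:
```python
import torch
import torch.nn as nn


class TrajectoryLSTM(nn.Module):
    """Map a short motion history to future (x, y) positions."""

    def __init__(self, feature_dim=6, hidden_dim=64, future_steps=20):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, future_steps * 2)  # (x, y) per step
        self.future_steps = future_steps

    def forward(self, history):
        # history: (batch, time, features) with e.g. x, y, speed,
        # acceleration, yaw and yaw rate per time step
        _, (h_n, _) = self.lstm(history)
        out = self.head(h_n[-1])
        return out.view(-1, self.future_steps, 2)


# usage sketch: 2 s of history at 10 Hz for a batch of 8 traffic participants
model = TrajectoryLSTM()
future = model(torch.randn(8, 20, 6))   # -> (8, 20, 2) predicted positions
```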
For analytics:
An artificial intelligence model may be trained on different driving styles from recorded professional drivers.
The model may be used to classify the driving style of the driver, e.g. sporty, aggressive, safe, ecological. The model may output a “driving style” difference to assess how close a driver gets to a desired driving style. The model may output a “rating”.
The fourth module 114 may be adapted to use this “rating” to determine the reward or to suggest improvements for the driving.
For example, an improvement is a recommendation for the driver of the vehicle to increase the distance to a vehicle in front or to start braking sooner before tight corners.
The model may be trained on famous drivers. The model may output a suggestion or recommendation for achieving a driving style most similar to that of the famous driver. The model may output the reward depending on the difference from the driving style of the famous driver, e.g. giving a higher reward the more closely the driving style is imitated. The output may be provided for sharing on social media.
A different reward may be determined depending on the vehicle type, the selected driving mode or the driving style.
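For illustration of this analytics path, a sketch of computing a style difference, a rating and a mode-dependent reward; the aggregated feature names, the 1/(1 + difference) rating and the mode weights are assumptions rather than values from the application:
```python
def style_rating(trajectory_features: dict, reference_style_features: dict):
    """Compare aggregated trajectory features against a reference style.

    Features could be, e.g., mean longitudinal acceleration, mean headway
    or braking-point offset. A smaller difference yields a higher rating.
    """
    difference = sum(
        abs(trajectory_features[k] - reference_style_features[k])
        for k in reference_style_features
    )
    rating = 1.0 / (1.0 + difference)   # in (0, 1], 1.0 means identical style
    return difference, rating


def style_reward(rating: float, driving_mode: str = "normal"):
    """Scale the rating into a reward per selected driving mode (assumed weights)."""
    mode_weight = {"normal": 1.0, "sport": 1.5, "ecological": 1.2}
    return mode_weight.get(driving_mode, 1.0) * rating
```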
The method of driving supervision is described with reference to Figure 2 below.
In a step 202, sensor data from at least one sensor of the vehicle is provided.
The sensor data may be captured while the vehicle moves in a drive cycle.
In a step 204, the driving trajectory of the vehicle is determined. The driving trajectory is determined in the example from the sensor data as described above. The at least one predicted trajectory of the traffic participant may be determined from the sensor data as well.
In a step 206, the reference for driving according to the vehicle type, the driving mode for the vehicle or the driving style is provided as described above. The reference may be determined depending on the at least one predicted trajectory as well. The reference may be determined by the artificial intelligence model. The artificial intelligence model may be trained to classify the driving style of the driver, in particular as sporty, aggressive, safe, ecological. In this case, the reference for driving may be determined depending on the driving style into which the artificial intelligence model classified the sensor data.
In a step 208, the reward is determined depending on the difference between the driving trajectory and the reference as described above. In the example, the reward is determined depending on the artificial intelligence model.
The artificial intelligence model may be trained on different driving styles from recorded driving of, in particular, professional drivers.
For different driving situations, a plurality of differences between different driving trajectories and respective references may be determined. This may be within one drive cycle or across different drive cycles of the same driver and the vehicle. The reward is in this case determined depending on the plurality of differences, e.g. by summing up individual rewards determined for the different driving situations.
A goal may be provided depending on the vehicle type, the driving mode for the vehicle or the driving style. In this case, the reward may be provided if the difference meets the goal. In one aspect, the reward is not provided if the goal is not met.
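A sketch combining these two aspects, summing per-situation rewards and gating them by a goal, is given below; the reward formula and the interpretation of the goal as a maximum admissible difference are assumptions:
```python
def total_reward(differences, goal):
    """Accumulate the reward over several driving situations.

    differences: per-situation differences between driving trajectory and
    reference, e.g. from one drive cycle or several cycles of one driver.
    goal: maximum difference that still earns a reward (assumed semantics).
    """
    individual_rewards = [1.0 / (1.0 + d) for d in differences if d <= goal]
    return sum(individual_rewards)


# e.g. three situations in one drive cycle, goal met in two of them
print(total_reward([0.2, 0.9, 0.4], goal=0.5))
```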
In a step 210, the reward is output, e.g. on a graphical user interface of the vehicle or to a social media interface. The driving rating may be determined and uploaded to the cloud based tool as well.
The driving mode may be selectable by the driver in the vehicle via a user interface. In this case, the driving mode selected by the driver of the vehicle may be recognized for use in the method.

Claims

Claims
1. A method of driving supervision, characterized by receiving (202) sensor data from at least one sensor of a vehicle, determining (204) a driving trajectory of the vehicle depending on the sensor data, providing (206) a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining (208) a reward depending on a difference between the driving trajectory and the reference, and outputting (210) the reward.
2. The method according to claim 1, characterized by determining (204) at least one predicted trajectory of a traffic participant from the sensor data, wherein the reference is determined (206) depending on the at least one predicted trajectory.
3. The method according to one of the previous claims, characterized in that the reward is determined (208) depending on an artificial intelligence model, wherein the artificial intelligence model is trained on different driving styles from recorded driving of in particular professional drivers.
4. The method according to claim 3, characterized in that the artificial intelligence model is trained to classify the driving style of the driver, in particular as sporty, aggressive, safe, ecological, the method further comprising determining the reference for driving depending on the driving style.
5. The method according to one of the previous claims, characterized by outputting (210) the reward on a graphical user interface of the vehicle or to a social media interface.
6. The method according to one of the previous claims, characterized by determining (208) for different driving situations a plurality of differences between different driving trajectories and respective references, and determining (208) the reward depending on the plurality of differences.
7. The method according to one of the previous claims, characterized by determining a goal depending on the vehicle type, the driving mode for the vehicle or the driving style, determining if the difference meets the goal, providing the reward if the goal is met and not providing the reward otherwise.
8. The method according to one of the previous claims, characterized by determining the driving mode selected by a driver of the vehicle via a user interface.
9. Device (100) for driving supervision, characterized in that the device (100) comprises means (108) configured to receive sensor data, means (110) configured to estimate a driving trajectory of a vehicle and means (114) configured to analyze the sensor data, wherein the means (108) configured to receive sensor data, the means (110) configured to estimate the driving trajectory of the vehicle, and the means (114) configured to analyze the sensor data and the driving trajectory are configured to cooperate for performing steps of the method according to one of the preceding claims.
PCT/EP2020/025534 2020-11-24 2020-11-24 Device for and method of driving supervision Ceased WO2022111784A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112020007528.1T DE112020007528T5 (en) 2020-11-24 2020-11-24 Device and method for driving monitoring
PCT/EP2020/025534 WO2022111784A1 (en) 2020-11-24 2020-11-24 Device for and method of driving supervision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2020/025534 WO2022111784A1 (en) 2020-11-24 2020-11-24 Device for and method of driving supervision

Publications (1)

Publication Number Publication Date
WO2022111784A1 (en)

Family

ID=73694956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/025534 Ceased WO2022111784A1 (en) 2020-11-24 2020-11-24 Device for and method of driving supervision

Country Status (2)

Country Link
DE (1) DE112020007528T5 (en)
WO (1) WO2022111784A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100023197A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on behavioral diagnosis
WO2014029882A1 (en) 2012-08-24 2014-02-27 Continental Teves Ag & Co. Ohg Method and system for promoting a uniform driving style
US20140322676A1 (en) * 2013-04-26 2014-10-30 Verizon Patent And Licensing Inc. Method and system for providing driving quality feedback and automotive support
US20170166217A1 (en) 2015-12-15 2017-06-15 Octo Telematics Spa Systems and methods for controlling sensor-based data acquisition and signal processing in vehicles
US20180060970A1 (en) 2016-09-01 2018-03-01 International Business Machines Corporation System and method for context-based driver monitoring
FR3074123A1 (en) * 2018-05-29 2019-05-31 Continental Automotive France EVALUATING A DRIVING STYLE OF A DRIVER OF A ROAD VEHICLE IN MOTION BY AUTOMATIC LEARNING
US20190263417A1 (en) * 2018-02-28 2019-08-29 CaIAmp Corp. Systems and methods for driver scoring with machine learning

Also Published As

Publication number Publication date
DE112020007528T5 (en) 2023-08-10

Legal Events

Code 121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20817220; Country of ref document: EP; Kind code of ref document: A1)
Code 122: Ep: pct application non-entry in european phase (Ref document number: 20817220; Country of ref document: EP; Kind code of ref document: A1)