
WO2021124140A1 - System and method for monitoring cognitive load of a driver of a vehicle - Google Patents


Info

Publication number
WO2021124140A1
WO2021124140A1 (PCT application PCT/IB2020/062016)
Authority
WO
WIPO (PCT)
Prior art keywords
task
driver
deviation
vehicle
cognitive load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2020/062016
Other languages
French (fr)
Inventor
Gowdham PRABHAKAR
Abhishek MUKHOPADHYAY
Pradipta Biswas
Sachin Deshmukh
Modiksha MADAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Indian Institute of Science IISC
Faurecia India Pvt Ltd
Original Assignee
Indian Institute of Science IISC
Faurecia India Pvt Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Indian Institute of Science IISC, Faurecia India Pvt Ltd filed Critical Indian Institute of Science IISC
Priority application: EP20902881.0A (published as EP4076191A4)
Publication of WO2021124140A1
Current legal status: Ceased

Classifications

    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/163: Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/168: Evaluating attention deficit, hyperactivity
    • A61B 5/18: Devices for evaluating the psychological state of vehicle drivers or machine operators
    • A61B 5/6893: Sensors mounted on external non-worn devices; cars
    • B60W 40/08: Estimation of non-directly measurable driving parameters related to drivers or passengers
    • A61B 2503/22: Motor vehicle operators, e.g. drivers, pilots, captains
    • A61B 2560/0242: Operational features adapted to measure environmental factors, e.g. temperature, pollution
    • A61B 2560/0266: Operational features for monitoring or limiting apparatus function

Definitions

  • the system can further include a cognitive engine 102 operatively coupled to the set of sensors 104.
  • the cognitive engine 102 can be configured to determine one or more parameter values from the sensed one or more ocular features; and determine one or more deviation states based on processing of the determined parameter values to enable real-time monitoring of the cognitive load of the driver of the vehicle.
  • the one or more processor(s) 202 can be configured to fetch and execute computer-readable instructions stored in a memory 204 of the cognitive engine.
  • the memory 204 can store one or more computer-readable instructions or routines, which can be fetched and executed to create or share the data units over a network service.
  • the memory 204 can be any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
  • the deviation state determination engine 214 can be configured to process the determined one or more parameters to facilitate determination of one or more deviation states.
  • the deviation state determination engine 214 can be used for estimating cognitive load of a driver of the vehicle. The cognitive load of the driver of the vehicle can be determined by comparing the one or more parameters with a dataset comprising a set of predefined or preconfigured parameter values.
  • the main or primary task is driving the vehicle; secondary tasks, i.e. tasks other than the primary task of driving, can include talking, controlling a head-up display (HUD), and the like.
  • the proposed method may be described in general context of computer-executable instructions.
  • computer-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types.
  • the method can also be practised in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network.
  • computer-executable instructions may be located in both local and remote computer storage media, including memory storage devices.
  • block 304 pertains to determining, by a processor of a cognitive engine operatively coupled to the set of sensors, one or more parameter values from the sensed one or more ocular features.
  • block 306 pertains to determining, by the processor, one or more deviation states based on processing of the determined parameter values.
  • the cognitive engine can be configured to detect whether the driver is performing a secondary task by comparing the sensed attributes with predefined thresholds. If any predefined threshold value is breached, it can be inferred that the driver is performing a secondary task. If the cognitive engine detects that the driver is performing a secondary task or perceiving a road hazard, it locks the secondary tasks so that they cannot be operated by the driver. For example, if a call or SMS is received on the driver's mobile phone, the cognitive engine withholds the notification while the driver's cognitive load is higher than the threshold.
  • the HUD will display only important items on the screen instead of the regular infotainment icons, and the icon size and colours will adapt to the cognitive state of the driver. When the driver's load comes down, the system will notify him of missed calls and SMS one by one, and he will regain access to other secondary tasks such as the music player.
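The threshold-based gating described in the bullets above can be sketched as follows. The threshold values, the parameter names, and the mapping from the number of breached thresholds to the three deviation states are all illustrative assumptions for this sketch; the patent does not publish such numbers.

```python
# Hypothetical per-parameter thresholds; real values would be calibrated.
THRESHOLDS = {"l1_norm": 12.0, "pupil_std": 0.15, "median_si_velocity": 25.0}

def deviation_state(params):
    """Map sensed parameter values to one of the three deviation states.

    A sketch: 0 breached thresholds -> primary task only, 1 -> secondary
    task, 2 or more -> perceived hazard. The real mapping is not disclosed.
    """
    breaches = sum(1 for k, t in THRESHOLDS.items() if params.get(k, 0.0) > t)
    if breaches == 0:
        return "primary_task_only"   # first deviation state
    if breaches == 1:
        return "secondary_task"      # second state: lock secondary tasks
    return "perceived_hazard"        # third state: declutter HUD, defer alerts

def allow_notification(params):
    # Hold call/SMS pop-ups until the load drops back to the first state.
    return deviation_state(params) == "primary_task_only"
```

Deferred notifications would then be replayed one by one once `allow_notification` returns true, matching the bullet above.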
  • the SVC classifier with an RBF kernel has two hyperparameters, g and C. As g increases from low to high, the decision boundary becomes more tightly curved, and the decision region correspondingly shrinks from a broad area to small islands around individual data points. C is the penalty for misclassifying a data point.
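As a minimal illustration of the effect of g, the RBF kernel value between two points decays faster as g grows, which is why a large g yields small decision islands around individual training points (this is only a kernel-value sketch; the actual classifier in the disclosure would additionally be trained with the C penalty):

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """RBF (Gaussian) kernel value k(a, b) = exp(-gamma * ||a - b||^2)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.exp(-gamma * np.sum((a - b) ** 2)))

# With a small gamma, a point 1 unit away still looks similar (broad region);
# with a large gamma, its similarity is nearly zero (tiny island).
broad = rbf_kernel([0, 0], [1, 0], gamma=0.1)
island = rbf_kernel([0, 0], [1, 0], gamma=10.0)
```

Here `broad` is close to 1 while `island` is close to 0, mirroring the broad-area versus small-islands behaviour described in the bullet.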


Abstract

A system and a method for monitoring cognitive load of a driver of a vehicle. The system comprises: a set of sensors (104) for sensing one or more ocular features of the driver; a cognitive engine (102) operatively coupled to the set of sensors (104), the cognitive engine (102) comprising a processor (202) coupled to a memory (204), the memory (204) storing instructions executable by the processor (202) to: determine one or more parameter values from the sensed one or more ocular features; and determine one or more deviation states based on processing of the determined parameter values to enable real-time monitoring of the cognitive load of the driver. The system is robust, accurate, fast, efficient, cost-effective and simple.

Description

SYSTEM AND METHOD FOR MONITORING COGNITIVE LOAD OF A DRIVER
OF A VEHICLE
TECHNICAL FIELD
[0001] The present disclosure relates to estimating cognitive load. In particular, the present disclosure pertains to estimating cognitive load using ocular features. More particularly, the present disclosure pertains to systems and methods for monitoring cognitive load of a driver of a vehicle.
BACKGROUND
[0002] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Driver distraction has been increasing with the growing number of sophisticated interactive systems inside cars. NHTSA [NHTSA 2012] has reported that 17% of car crashes involved distracted drivers and that 5% of distraction-related crashes involved an electronic device. NHTSA has also reported that operating any secondary task should not take a driver's eyes off the road for more than 2 seconds [NHTSA 2012]. Automating the detection of distraction can be useful for alerting drivers and guiding them back to a safe state. Detecting whether a driver is distracted is challenging, as current distraction-detection technology does not incorporate cognitive load estimation. Estimating cognitive load from physiological parameters is itself a challenging task, since there is no direct probe into what a person is thinking. Researchers have devised different means of measuring cognitive load, including physiological parameters such as eye metrics [Redlich 1908; Westphal 1907; Marshall 2002; Marshall 2007; Tokuda 2011; Gavas 2017; Duchowski 2018; Fridman 2018], heart rate and skin response [Healey 2011], acoustic voice features [Boril 2011], affective states [Afzal 2009; Sezgin 2007] and EEG [John 2004]. Researchers [Afzal 2009; Sezgin 2007] investigated detecting cognitive states by capturing the affective states of drivers. In such cases, it becomes challenging to capture and process video under the varying luminance and exposure conditions inside a car, which can cause the system to miss a set of facial feature points. Moreover, an individual's facial expressions may fail to correspond to the mapped emotion, since people express emotions in different ways. Despite the problems of occlusion, lighting and pose variation, researchers have presented evidence for affective computing [Zeng 2009] for cognitive state detection.
Researchers have also explored eye gaze movements [Yoshida 2014; Tokuda 2011], heart rate or skin response [Healey 2011] and acoustic voice features [Boril 2011] for estimating the cognitive state of drivers. Skin-response systems require intrusive attachments that cause users unnecessary discomfort while driving, and acoustic features can be tracked only while the driver is talking.
[0004] Redlich [Redlich 1908] and Westphal [Westphal 1907] reported a relation between physical task demand and pupil dilation. Hess [Hess 1975] reported that changes in pupil dilation are related to changes in the viewing angle of a photograph. More recent researchers have estimated cognitive load by measuring the frequency and power of pupil dilation. Gavas [Gavas 2017] as well as Duchowski [Duchowski 2018] used a chin rest in their experiments to control head movements, which makes such systems difficult to realize in real-time situations. Researchers [Marshall 2002; Marshall 2007] reported that a spike in pupil dilation corresponds to an increase in cognitive load. This spike is identified by processing the pupil dilation signal for its wavelet transform coefficients and calculating a metric called the Index of Cognitive Activity (ICA). Marshall evaluated this method for estimating participants' cognitive load in automotive [Marshall 2002] as well as aviation [Marshall 2007] settings, but used only mental tasks (asking the participant to answer questions vocally) to induce cognitive load. There are few studies on estimating cognitive load from pupil dilation under varying lighting conditions, since pupil dilation is sensitive to variation in luminance. Researchers have also estimated a driver's cognitive load by investigating variance in saccadic intrusions (SI), changes in fixation duration and blink count [Lee 2007; Liang 2014; Palinko 2010; Yoshida 2014]. Toyota [Basir 2004] holds a patent for detecting whether the driver is looking away from the road by tracking eyelid movements. Researchers [Prabhakar 2018] have worked on using simple commercial off-the-shelf sensors, such as an eye gaze tracker or Kinect, for operating secondary tasks through multimodal interaction; using such sensors for distraction detection would let the same hardware serve both interaction and monitoring.
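The ICA-style processing described above, detecting spikes in pupil dilation from wavelet coefficients, can be sketched as follows. The one-level Haar detail coefficients and the 2-sigma spike threshold are illustrative choices for this sketch, not the method patented by Marshall.

```python
import numpy as np

def haar_detail(signal):
    """One-level Haar wavelet detail coefficients of a 1-D signal."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]  # Haar pairs need an even length
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

def ica_like_index(pupil, threshold=None):
    """Rate of abrupt pupil-dilation increases per sample (an ICA-like metric).

    A rising sample pair yields a negative Haar detail coefficient, so
    counting coefficients below -threshold counts abrupt dilations. The
    default 2*std cutoff is a hypothetical choice, not from the patent.
    """
    d = haar_detail(pupil)
    if threshold is None:
        threshold = 2.0 * np.std(d)
    return float(np.sum(d < -threshold)) / len(pupil)
```

A flat pupil trace yields an index of zero, while a trace containing a sudden dilation yields a positive index, which is the qualitative behaviour the ICA literature associates with increased cognitive load.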
[0005] Human behaviour can be monitored by tracking hand, head, finger and eye movements using Commercial Off-The-Shelf (COTS) sensors such as a head movement tracker (Microsoft Kinect), an Inertial Measurement Unit (IMU), an eye gaze tracker, a finger movement tracker (Leap Motion), and so on. By monitoring this behaviour in a car, one can estimate a user's distraction due to eyes-off-road time or due to performing a secondary task while driving. But there are situations where drivers do not take their eyes off the road, yet their thoughts divert their focus from driving. Such distraction leaves the driver physically operating the vehicle but mentally unprepared to face risky situations, and it can be detected or estimated by monitoring brain activity.
[0006] At present, EEG is the most commonly used non-invasive means of monitoring brain activity. The electrodes of an EEG tracker are placed on the head so that they make contact with the scalp. Several research groups investigate improving the accuracy of such trackers, while others exploit their usability in different environments for estimating cognitive load. In situations where drivers cannot practically wear an EEG tracker, owing to its cost and the unnecessary discomfort it causes, alternate ways of estimating cognitive load are needed. Several psychological researchers have given strong evidence that cognitive load is reflected in the pupil dilation of the eyes, and Marshall [Marshall 2007] has discussed a method to estimate cognitive load by calculating the ICA metric.
[0007] Researchers have worked on estimating cognitive load from pupil dilation, but very few eye gaze trackers can detect pupil diameter, and those that can are very expensive. To estimate cognitive load using lower-end eye gaze trackers without pupil diameter detection, researchers such as Abadi and Tokuda have used SI and microsaccade rate, which are calculated using only the gaze coordinates from the tracker. Abadi [Abadi 2004] characterized monophasic square wave intrusions (MSWI), a type of SI, with mean amplitude, frequency and duration of 0.7±0.5, 11.5±11.6 per min, and 255±147 ms respectively. Tokuda [Tokuda 2011] estimated mental workload from SI and reported strong evidence of an increase in SI velocity with increasing task difficulty. Tokuda conducted a dual-task study with an N-back test and a free-viewing task, but did not report any metric on the participants' free-viewing performance, which might have had an impact on cognition; he also used an old Tobii tracker that may not match current trackers in tracking accuracy. Siegenthaler [Siegenthaler 2014] found a decrease in microsaccade rate with increasing task difficulty in an arithmetic task that progressively loaded working memory. Gao [Gao 2015] reported suppression of microsaccade rate with increasing arithmetic task difficulty for non-visual cognitive processing.
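Computing an SI-velocity metric from gaze coordinates alone, as Tokuda's approach implies, can be sketched as below. The sampling rate and the velocity band separating fixation drift from large saccades are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

def gaze_velocity(x, y, fs=60.0):
    """Point-to-point gaze speed (coordinate units per second) from gaze
    coordinates sampled at fs Hz."""
    v = np.hypot(np.diff(x), np.diff(y)) * fs
    return v

def median_si_velocity(x, y, fs=60.0, vmin=5.0, vmax=100.0):
    """Median velocity of samples in a band between slow fixation drift and
    large saccades, a rough stand-in for saccadic-intrusion velocity.

    The (vmin, vmax) band is a hypothetical choice; real SI detection also
    checks the square-wave shape (out-and-back movement) of the intrusion.
    """
    v = gaze_velocity(x, y, fs)
    band = v[(v > vmin) & (v < vmax)]
    return float(np.median(band)) if band.size else 0.0
```

Under increased task difficulty this metric would rise, mirroring Tokuda's reported increase in SI velocity.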
[0008] Dalmaso [Dalmaso 2017] reported that microsaccade rate drops under high task demand. Krejtz [Krejtz 2018] captured pupil diameter and microsaccades as indicators of cognitive load, reporting mild evidence of a decrease in microsaccade rate and strong evidence of an increase in microsaccade magnitude with increasing task difficulty. However, these researchers used a chin rest to arrest head movement, which limits the application of such techniques in real-world systems. Biswas [3] designed a driving-simulator study and reported evidence of detecting driver distraction from the velocity of SI and from the deviation of yaw measured by a Kinect and by an IMU.
[0009] There is therefore a need in the art for a system and method for monitoring the cognitive load of a driver of a vehicle. In particular, what is needed is a system and method for monitoring the cognitive load of a driver using ocular features that seeks to overcome, or at least ameliorate, one or more of the above-mentioned problems and other limitations of existing solutions, and that utilizes techniques which are robust, accurate, fast, efficient, cost-effective and simple.
OBJECTS OF THE PRESENT DISCLOSURE
[0010] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[0011] It is an object of the present disclosure to provide system and method for monitoring cognitive load of a driver of a vehicle.
[0012] It is another object of the present disclosure to provide a system and method that can determine cognitive load non-invasively and without physical contact.
[0013] It is another object of the present disclosure to provide system and method for monitoring cognitive load of a driver of a vehicle in real time to help avoid any hazard.
[0014] It is another object of the present disclosure to provide system and method for monitoring cognitive load of a driver of a vehicle that is cost effective and easy to implement.
[0015] It is another object of the present disclosure to provide system and method for monitoring cognitive load of a driver of a vehicle that can be configured with a vehicle to help minimize hazards due to negligence of the driver.
SUMMARY
[0016] The present disclosure relates to estimating cognitive load. In particular, the present disclosure pertains to estimating cognitive load using ocular features. More particularly, the present disclosure pertains to systems and methods for monitoring cognitive load of a driver of a vehicle.
[0017] An aspect of the present disclosure provides a system for a vehicle for monitoring cognitive load of a driver of the vehicle, said system including: a set of sensors for sensing one or more ocular features of the driver; a cognitive engine operatively coupled to the set of sensors, the cognitive engine comprising a processor coupled to a memory, the memory storing instructions executable by the processor to: determine one or more parameter values from the sensed one or more ocular features; and determine one or more deviation states based on processing of the determined parameter values to enable real-time monitoring of the cognitive load of the driver.
[0018] In an aspect, the system comprises an alert generation engine for generating an alert signal based on the determined one or more deviation states.
[0019] In an aspect, the one or more deviation states comprises at least: a first deviation state pertaining to performing a primary task only; a second deviation state pertaining to performing the primary task and a secondary task simultaneously; and a third deviation state pertaining to a perceived hazard, wherein the primary task pertains to a designated task, and the secondary task pertains to a task other than the primary task.
[0020] In an aspect, the set of sensors comprises any or a combination of an eye gaze tracker and an ambient light sensor.
[0021] In an aspect, the one or more ocular features comprises pupil diameters and gaze position.
[0022] In an aspect, the one or more parameters comprises the L1 norm of the pupil spectrum, the low-pass filtered pupil spectrum, the standard deviation of pupil diameter, the fixation rate, the saccade rate and the median SI velocity.
[0023] In an aspect, there is provided a vehicle comprising the system for monitoring cognitive load of a driver of the vehicle.
[0024] Another aspect of the present disclosure provides a method for monitoring cognitive load of a driver of a vehicle, said method comprising the steps of: sensing, by a set of sensors, one or more ocular features of the driver; determining, by a processor of a cognitive engine operatively coupled to the set of sensors, one or more parameter values from the sensed one or more ocular features; and determining, by the processor, one or more deviation states based on processing of the determined parameter values.
[0025] In an aspect, the one or more deviation states comprises at least: a first deviation state pertaining to performing a primary task only; a second deviation state pertaining to performing the primary task and a secondary task simultaneously; and a third deviation state pertaining to a perceived hazard, wherein the primary task pertains to a designated task, and the secondary task pertains to a task other than the primary task.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0027] FIG. 1 illustrates an exemplary block diagram representation of the system for monitoring cognitive load in accordance with an embodiment of the present disclosure.
[0028] FIG. 2 illustrates exemplary functional engines of the cognitive engine in accordance with an embodiment of the present disclosure.
[0029] FIG. 3 is a flow diagram for monitoring cognitive load of a driver of the vehicle using his ocular features in accordance with an embodiment of the present disclosure.
[0030] FIG. 4 illustrates an exemplary block diagram representation of the cognitive load monitoring system in accordance with an embodiment of the present disclosure.
[0031] FIG. 5 illustrates an exemplary block diagram representation of process of alert system in accordance with an embodiment of the present disclosure.
[0032] FIG. 6 illustrates an exemplary graphical representation comparing the accuracy of individual ocular parameters and of different machine learning models in estimating cognitive load.
DETAILED DESCRIPTION
[0033] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0034] Various terms as used herein are shown below. To the extent a term used in a claim is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0035] The present disclosure relates to estimating cognitive load. In particular, the present disclosure pertains to estimating cognitive load using ocular features. More particularly, the present disclosure pertains to systems and methods for monitoring cognitive load of a driver of a vehicle.
[0036] An aspect of the present disclosure provides a system for a vehicle for monitoring cognitive load of a driver of the vehicle, said system including: a set of sensors for sensing one or more ocular features of the driver; a cognitive engine operatively coupled to the set of sensors, the cognitive engine comprising a processor coupled to a memory, the memory storing instructions executable by the processor to: determine one or more parameter values from the sensed one or more ocular features; and determine one or more deviation states based on processing of the determined one or more parameter values to enable real-time monitoring of the cognitive load of the driver.
[0037] In an aspect, the system comprises an alert generation engine for generating alert signal based on the determined one or more deviation states.
[0038] In an aspect, the one or more deviation states comprise at least: a first deviation state pertaining to performing a primary task only; a second deviation state pertaining to performing the primary task and a secondary task simultaneously; and a third deviation state pertaining to a perceived hazard, wherein the primary task pertains to a designated task, and the secondary task pertains to a task other than the primary task.
[0039] In an aspect, the set of sensors comprises any or a combination of an eye gaze tracking sensor and an ambient light sensor.
[0040] In an aspect, the one or more ocular features comprise pupil diameter and gaze position.
[0041] In an aspect, the one or more parameters comprise the L1 norm of the spectrum of pupil dilation (L1NS), the low-pass-filtered spectrum of pupil dilation, the standard deviation of pupil dilation, fixation rate, saccade rate and median saccadic intrusion (SI) velocity.
[0042] In an aspect, a vehicle is provided comprising the system for monitoring cognitive load of a driver of the vehicle.
[0043] Another aspect of the present disclosure provides a method for monitoring cognitive load of a driver of a vehicle, said method comprising the steps of: sensing, by a set of sensors, one or more ocular features of the driver; determining, by a processor of a cognitive engine operatively coupled to the set of sensors, one or more parameter values from the sensed one or more ocular features; and determining, by the processor, one or more deviation states based on processing of the determined one or more parameter values.

[0044] In an aspect, the one or more deviation states comprise at least: a first deviation state pertaining to performing a primary task only; a second deviation state pertaining to performing the primary task and a secondary task simultaneously; and a third deviation state pertaining to a perceived hazard, wherein the primary task pertains to a designated task, and the secondary task pertains to a task other than the primary task.
[0045] FIG. 1 illustrates an exemplary block diagram representation of the system for monitoring cognitive load in accordance with an embodiment of the present disclosure.
[0046] In an embodiment, a system for a vehicle for monitoring cognitive load of a driver of the vehicle using ocular features can include a set of sensors 104. The set of sensors 104 can be configured to determine one or more ocular features of a driver of the vehicle. The set of sensors can include, but is not limited to, an eye gaze tracking sensor and an ambient light sensor. The one or more ocular features can include, but are not limited to, pupil diameter and gaze position.
[0047] The system can further include a cognitive engine 102 operatively coupled to the set of sensors 104. The cognitive engine 102 can be configured to determine one or more parameter values from the sensed one or more ocular features, and to determine one or more deviation states based on processing of the determined one or more parameter values to enable real-time monitoring of the cognitive load of the driver of the vehicle. The one or more parameter values comprise the L1 norm of the spectrum of pupil dilation (L1NS), the low-pass-filtered spectrum of pupil dilation, the standard deviation of pupil dilation, fixation rate, saccade rate and median SI velocity. The one or more deviation states comprise at least: a first deviation state pertaining to performing a primary task only; a second deviation state pertaining to performing the primary task and a secondary task simultaneously; and a third deviation state pertaining to a perceived hazard, wherein the primary task pertains to a designated task, and the secondary task pertains to a task other than the primary task.
[0048] FIG. 2 illustrates exemplary engine of cognitive engine in accordance with an embodiment of the present disclosure.
[0049] As illustrated, in an embodiment, the cognitive engine 102 can include one or more processor(s) 202 configured to process the one or more ocular features sensed by the set of sensors 104, determine one or more parameter values therefrom, and determine one or more deviation states based on processing of the determined parameter values to enable real-time monitoring of the cognitive load of the driver.

[0050] In an embodiment, the one or more processor(s) 202 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 202 can be configured to fetch and execute computer-readable instructions stored in a memory 204 of the cognitive engine. The memory 204 can store one or more computer-readable instructions or routines, which can be fetched and executed to create or share the data units over a network service. The memory 204 can be any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0051] The cognitive engine 102 can include an interface(s) 206. The interface(s) 206 can include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 can facilitate communication of the cognitive engine 102 with various devices coupled to the cognitive engine 102, such as an input unit and an output unit. The interface(s) 206 can also provide a communication pathway for one or more components of the cognitive engine and the proposed device 100. Examples of such components include, but are not limited to, processing engine(s) 208 and a database 216.
[0052] The processing engine(s) 208 can be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 can be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) 208 can include a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 208. In such examples, the processing engine(s) 208 can include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the cognitive engine 102 and the processing resource. In other examples, the processing engine(s) 208 can be implemented by electronic circuitry.
[0053] The database 216 can include data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 208.

[0054] In an exemplary embodiment, the processing engine(s) 208 can include an ocular feature determination engine 212, a deviation state determination engine 214, a cognitive load controlling engine 216, and other engine(s) 218, but is not limited to the like.

[0055] It would be appreciated that the engines described are only exemplary engines, and any other engines or sub-engines may be included as part of the cognitive engine 102 or the processing engine(s) 208. These engines too may be merged or divided into super engines or sub-engines as may be configured.
[0056] In an embodiment, the ocular feature determination engine 212 is configured to receive the sensed one or more ocular features from the set of sensors associated with the driver of the vehicle. In one embodiment, the set of sensors can be configured with a wearable device or apparatus to be worn by the user to facilitate sensing of the ocular features. In another embodiment, the set of sensors can be configured with the vehicle to facilitate sensing of the ocular features while driving and/or riding the vehicle.
[0057] In an embodiment, based on the receipt of the sensed ocular features, the ocular feature determination engine 212 can determine one or more parameter values from the received ocular features. The one or more ocular features include, but are not limited to, pupil diameter and gaze position. The one or more parameters include, but are not limited to, the L1 norm of the spectrum of pupil dilation (L1NS), the low-pass-filtered spectrum of pupil dilation, the standard deviation of pupil dilation (STDP), fixation rate, saccade rate and median SI velocity.
[0058] In an embodiment, the deviation state determination engine 214 can be configured to process the determined one or more parameters to facilitate determination of one or more deviation states. In an embodiment, the deviation state determination engine 214 can be used for estimating cognitive load of a driver of the vehicle. The cognitive load of the driver of the vehicle can be determined by comparing the one or more parameters with a dataset comprising a set of predefined or preconfigured parameter values.
[0059] It has been observed that the cognitive load of a person is directly related to the ability of the person to perform a given task with proficiency. Therefore, observing or monitoring the cognitive load of the person in real time can help in segregating different deviation states based on the determined or monitored real-time cognitive load of the person.

[0060] In an embodiment, the cognitive load of the driver of the vehicle can be used for segregating deviation into one or more deviation states. The one or more deviation states can be defined based on a primary task and one or more secondary tasks. The primary task can be defined as the main or designated task that the driver of the vehicle performs, and secondary tasks can be defined as tasks performed while performing the primary task.
[0061] For example, if a person is driving a vehicle, then the main or primary task is driving the vehicle, and the secondary tasks can be talking, controlling a head-up display (HUD) and the like, i.e. tasks other than the primary task of driving the vehicle.
[0062] In an embodiment, based on the estimated or determined cognitive load of the driver, the cognitive load controlling engine 216 can facilitate reducing the cognitive load of the driver by generating a control signal. The control signal can be used for performing various remedial actions to help reduce the cognitive load of the driver of the vehicle, so that the driver can perform the primary task with no or minimal distraction from the secondary tasks.
[0063] For example, if a driver is driving the vehicle, the primary task is driving the vehicle, and performing other tasks such as controlling music, interacting with the HUD, etc. can be considered secondary tasks. If the driver is controlling music and is hence distracted, the cognitive load of the driver would be high. Hence, to avoid hazards such as an accident, the cognitive load of the driver needs to be reduced. Therefore, the cognitive load controlling engine 216 can generate a control signal to help minimize the cognitive load or the distraction of the driver.
[0064] In an embodiment, the cognitive load controlling engine 216 can monitor the estimated cognitive load of the driver in real time and compare the estimated cognitive load with pre-defined thresholds to help characterise or segregate it into one or more deviation states. Based on the assigned or allocated deviation state, a control action or remedial action can then be taken to facilitate avoiding hazards.
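The threshold comparison described above can be sketched as a simple state-assignment function. This is an illustrative sketch only: the two-threshold scheme, the function name and the state labels are assumptions, not part of the disclosure.

```python
def classify_deviation_state(load, task_threshold, hazard_threshold):
    """Map an estimated cognitive-load value to one of three deviation states.

    Illustrative mapping: below `task_threshold` the driver is assumed to be
    performing the primary task only; between the two thresholds, a secondary
    task as well; above `hazard_threshold`, a perceived hazard.
    """
    if load >= hazard_threshold:
        return "perceived_hazard"        # third deviation state
    if load >= task_threshold:
        return "primary_plus_secondary"  # second deviation state
    return "primary_only"                # first deviation state
```

In practice the thresholds would be calibrated per driver, as discussed later in the experimental section.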
[0065] FIG. 3 is a flow diagram for monitoring cognitive load of a person using his ocular features in accordance with an embodiment of the present disclosure.
[0066] In an aspect, the proposed method may be described in the general context of computer-executable instructions. Generally, computer-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method can also be practised in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer-executable instructions may be located in both local and remote computer storage media, including memory storage devices.

[0067] The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method or alternate methods. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the method may be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method may be considered to be implemented in the above-described system.
[0068] In context of the flow diagram 300, block 302 pertains to sensing one or more ocular features of a driver of the vehicle, using a set of sensors.
[0069] Further, block 304 pertains to determining, by a processor of a cognitive engine operatively coupled to the set of sensors, one or more parameters value from the sensed one or more ocular features.
[0070] Further, block 306 pertains to determining, by the processor, one or more deviation states based on processing of the determined one or more parameters value.
[0071] FIG. 4 illustrates an exemplary block diagram representation of cognitive load monitoring system in accordance with an embodiment of the present disclosure.
[0072] In an exemplary aspect, the proposed cognitive load monitoring system can include an eye tracking device 402 for tracking or extracting various ocular parameters of a driver of the vehicle. The various ocular features can include parameters like pupil data 404 and gaze position 406. In an embodiment, one or more processors of a cognitive engine can be configured to determine and/or extract feature metrics such as the L1 norm of the spectrum of pupil dilation 408, the low-pass-filtered spectrum of pupil dilation 410, the standard deviation of pupil data 412, fixation rate 414, saccade rate 416 and median SI velocity 418, based on the pupil data 404 and gaze position 406 received from the eye tracking device 402. Further, the one or more processors of the cognitive engine can classify at least three distraction states, namely driving without a secondary task 422, driving with a secondary task 424 and perceived road hazard 426. The classification can be performed by the one or more processors of the cognitive engine based on techniques such as, but not limited to, a neural network 420.
[0073] Further, based on the classified at least three distraction states, the cognitive engine can generate an alert signal. In an embodiment, the generated alert signal can be used for alerting the driver of the vehicle based on these distraction states from the monitoring system.

[0074] FIG. 5 illustrates an exemplary block diagram representation of the process of the alert system in accordance with an embodiment of the present disclosure.
[0075] In an exemplary embodiment, the proposed system can be incorporated with the HUD system of a vehicle to generate alerts based on detection of a perceived danger or hazard by monitoring various ocular features of a driver of the vehicle. In an embodiment, an eye tracker 502 of the proposed system can be used for sensing or capturing various ocular features of the eyes of the driver of the vehicle. Further, based on the captured or sensed ocular features, one or more processors of the cognitive engine, operatively coupled to the eye tracker device 502, can be used for determining various features, such as detecting eyes-off-road using an eyes-off-road detection system 504, and monitoring the cognitive load of the driver using the cognitive load monitoring system 506. For example, if an eyes-off-road event is detected by the eyes-off-road detection system 504, the one or more processors of the cognitive engine can alert the driver by an auditory sound followed by a voice note saying "please concentrate on driving", an LED strip glowing with a blinking pattern to alert the driver visually, etc. In an embodiment, the cognitive load monitoring system 506 can be configured to classify the current event into at least three different distraction states.
[0076] Further, the cognitive engine can be configured to detect or determine whether the driver is performing a secondary task by comparing the sensed attributes with pre-defined thresholds. If the pre-defined threshold value is breached for any attribute, it can be inferred that the driver is performing a secondary task. If the cognitive engine detects that the driver is performing a secondary task or perceiving a road hazard, it will lock the secondary tasks from being operated by the driver. For example, if a call or SMS is received on the mobile phone of the driver, the cognitive engine will withhold the notification from the driver while his cognitive load is higher than the threshold. The HUD will display only important items on the screen instead of the regular infotainment icons. The icon size and colours will adapt according to the cognitive state of the driver. When his load comes down, the system will gradually notify the driver about his missed calls and SMS one by one, and he will regain access to operate other secondary tasks like the music player, etc.
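A minimal sketch of the notification-gating behaviour described above. The state labels and the one-at-a-time delivery policy are illustrative assumptions rather than the disclosed implementation.

```python
def gate_notifications(state, pending):
    """Decide which pending notifications (calls/SMS) to deliver now.

    Illustrative policy: while the driver performs a secondary task or
    perceives a hazard, all notifications are withheld; once the load comes
    down, pending items are delivered one at a time.
    """
    if state in ("primary_plus_secondary", "perceived_hazard"):
        return []          # withhold everything; items stay pending
    return pending[:1]     # deliver one item at a time
```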
[0077] Various experiments were conducted for analysing, observing and determining various aspects of the instant invention. The methodology involved extraction of features like fixation rate and SI velocity from eye gaze points. Further, the values extracted during experimentation included the Sum of Magnitudes of the Single-sided Spectrum (L1NS) and the Standard deviation of pupil dilation (STDP) from pupil dilation.

L1 Norm of Spectrum (L1NS) using FFT
[0078] An FFT (Fast Fourier Transform) was performed over the raw data of pupil dilation, head yaw and EEG (T7). We summed up the magnitude values of bins corresponding to 1 Hz to 5 Hz [Onorati 2013] in the single-sided spectrum. We did this procedure for the full length of the signal as well as in time buffers of 1 second for real-time implementation. We calculated the L1NS for each second and stored it in an array L1NS_C1 corresponding to data_C1. We repeated the same procedure to calculate L1NS_C2 and L1NS_C3 from data_C2 and data_C3 respectively. For each participant, we calculated the mean of L1NS_C1, L1NS_C2 and L1NS_C3 and checked if L1NS_C3 > L1NS_C2 and L1NS_C3 > L1NS_C1. We repeated this for all participants and checked if L1NS_C3 was significantly greater than L1NS_C2 and L1NS_C1.
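The per-buffer L1NS computation described above can be sketched as follows. This is an illustrative sketch: the function name and the choice of sampling rate are assumptions; the 1-5 Hz band follows the text.

```python
import numpy as np

def l1ns(signal, fs):
    """L1 norm of the 1-5 Hz band of the single-sided magnitude spectrum.

    `signal` is one buffer of samples (e.g. pupil dilation) at `fs` Hz.
    Sums the magnitudes of the FFT bins falling between 1 Hz and 5 Hz.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n      # single-sided magnitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 5.0)          # bins in 1-5 Hz
    return float(spectrum[band].sum())
```

A 1-second buffer at the tracker's sampling rate would be passed in for each update, and the per-second values accumulated into the L1NS arrays described above.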
Standard deviation of pupil dilation (STDP)
We calculated the standard deviation of the pupil dilation data in a running Gaussian window of 1 second with an overlap of 70%.
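A sketch of the running-window STDP computation. For simplicity this uses a plain (unweighted) standard deviation in place of the Gaussian-weighted window described above; the function name and signature are illustrative assumptions.

```python
import numpy as np

def stdp(pupil, fs, window_s=1.0, overlap=0.7):
    """Standard deviation of pupil dilation in a running 1 s window, 70% overlap.

    Returns one value per window position. The source describes a Gaussian
    window; a plain standard deviation is used here for brevity.
    """
    win = int(window_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    return [float(np.std(pupil[i:i + win]))
            for i in range(0, len(pupil) - win + 1, step)]
```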
[0079] As used herein, and unless the context dictates otherwise, the term “coupled to” is intended to include both direct coupling (in which two elements that are coupled to each other are in contact with each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms “coupled to” and “coupled with” are used synonymously. Within the context of this document, the terms “coupled to” and “coupled with” are also used euphemistically to mean “communicatively coupled with” over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices.
Velocity of Saccadic Intrusions (SI)
We extracted 2D gaze positions (x, y) and their corresponding timestamps from the data file of the Tobii glasses and stored them in x, y and t respectively. Here, the camera resolution is 1920x1080 and the horizontal visual angle of the Tobii glasses is 82° [Tobii 2018], so the number of pixels within 0.4° is 9.3 pixels. We store the gaze point's timestamp in a hash table with gaze coordinates as indices. We find the maximum deviation of the gaze point's x-coordinate until the gaze point revisits its position in the hash table. We count an SI if the maximum deviation is greater than 0.4° of visual angle and the return time is between 60 ms and 870 ms.

SI velocity = (max deviation of gaze-x) / (return time)

Fixation Rate and Saccade Rate
We calculated the number of fixations from 3D gaze direction vectors for the left and right eye separately. We calculated the visual angle and velocity of gaze movement from the gaze direction vectors. We set a threshold of 100°/sec, above which a movement is counted as a saccade. The number of fixations is counted as the number of saccades minus 1. The fixation rate is calculated as the number of fixations per second.
Fixation Rate = Number of Fixations / ((End time - Start time) of fixation)

Saccade Rate = Number of Saccades / ((End time - Start time) of saccade)
Study with professional drivers
[0080] Experimentation was conducted to validate if ocular parameters can distinguish between different cognitive loads caused by performing secondary tasks in cars. We started by collecting qualitative information from each driver.
Participants
[0081] A set of 13 professional male drivers with an average age of 36 years and a standard deviation of 8 years undertook the study. All drivers were hired from a travel agency. The drivers had an average driving experience of 7150 km with a standard deviation of 2700 km. The average number of years the drivers had driven holding a valid license was 11 years, with a standard deviation of 7 years. All participants held valid two-wheeler and four-wheeler Indian licenses.
Material
[0082] Tobii Pro Glasses 2 were used to record the video as well as the eye metric data. Tobii Pro software was used to export the data into a TSV file. We used MATLAB to analyse the data and find the instants where the driver performed secondary tasks.
Design
[0083] We designed the study such that each driver had to drive his vehicle from a fixed start point to a fixed location inside the campus and return to the same start point. We recorded the eye metric data and scene camera video for each driver during the trip. On the way, they were given secondary tasks to operate. We observed pupil and gaze parameters for estimating the cognitive load of the drivers.
[0084] Types of secondary task given to drivers:
1. Induced secondary tasks
   a. Asked the driver to turn on/off the AC
   b. Asked the driver to turn on/off the music player or change the FM radio station
2. Voluntary secondary tasks
   a. Driver talking with passengers
   b. Driver opening/closing windows
3. Involuntary secondary tasks
   a. A call was made to the driver while driving from an unknown number, without his knowledge
Procedure
[0085] Before the start of the trial, drivers were interviewed about their driving experience and asked qualitative questions regarding their perspective on distraction. Participants were free to give their own reasoning for why they may or may not engage with a range of different technological tasks while driving across the road types. The researcher probed the participant to expand on their discussion points for clarity and further information where necessary. All the interviews were conducted by the same primary researcher for consistency.

[0086] The drivers were asked to wear the Tobii glasses and walk through a calibration process before they started their ride. Each driver was instructed to start driving his vehicle from a start point and come back to the same point after riding within the campus. The eye gaze points, pupil dilation data and scene video were recorded for each driver. The recorded data was then analysed for L1NS, STDP and SI velocity during the secondary task events.
[0087] The videos were tagged with timestamps corresponding to the start and end of each secondary task event; timestamps were also tagged where the driver neither performed any secondary task nor observed a road hazard. STDP, L1NS and SI velocity were then calculated from the pupil data and eye gaze data corresponding to these events. We checked whether the parameter values were higher while operating secondary tasks than during the events where the driver performed no tasks.
Results
STDP for different secondary tasks
[0088] It was observed during the experimentation that the STDP of both eyes was significantly (Signed-Rank Test: p < 0.01) higher when operating a secondary task than when driving without any secondary task.
L1NS for different secondary tasks

[0089] It was observed during the experimentation that the L1NS of both eyes was significantly (Signed-Rank Test: p < 0.01) higher when operating a secondary task than when driving without any secondary task.
SI velocity for different secondary tasks
[0090] It was observed during the experimentation that there was weak evidence (Signed-Rank Test: p < 0.1) of SI velocity being higher when operating a secondary task than when driving without any secondary task.
[0091] FIG. 6 illustrates exemplary graphical representation of comparing accuracy of different ocular parameters individually and different machine learning models in terms of estimating cognitive load.
[0092] We used different time series data, namely STDP, L1NS, SI velocity and saccade rate, and used machine learning models to predict the 'No task' and 'Task' classes. Initially we started our prediction model by using a Support Vector Classifier (SVC) with different kernels (e.g., polynomial kernel, Radial Basis Function (RBF) kernel). Later we compared the results of the SVC model with a Neural Network (NN) model. The RBF kernel on two samples x and x', represented as feature vectors in some input space, is defined as K(x, x') = exp(-γ||x - x'||²), where ||x - x'||² may be recognized as the squared Euclidean distance between the two feature vectors. An SVC using the RBF kernel has two parameters, γ and C. If we change the value of γ from low to high, the curvature of the decision boundary also changes from low to high; correspondingly, the decision region changes from a broad area to small islands around data points. C is the penalty for misclassifying a data point. In our studies, we experimented with different combinations of values for γ and C and obtained the highest accuracy of 67.18% with γ = 0.001 and C = 1. The polynomial kernel can be formulated as K(x, x') = (xᵀx' + 1)^d, where x and x' represent feature vectors in some input space and d is the degree. The polynomial kernel SVC tends to work better without scaling the dataset. We obtained 62.5% accuracy with γ = 100, C = 1 and d = 6. To increase the accuracy of the experiment, we introduced a feed forward neural network model, where the layers and the number of neurons in the model are structured as 6 - 160 - 80 - 1 (input layer - hidden layer - hidden layer - output layer). We used the ReLU activation function, f(x) = max(0, x), where x is the input feature, in the hidden layers. We used the Sigmoid activation function, f(x) = 1/(1 + e^(-x)), in the output layer, as our problem is binary classification (predict either 0 (No-Task) or 1 (Task)).
We used the ‘Adam’ optimization algorithm to overcome vanishing learning rate, slow convergence and high variance in the parameter updates. We used the ‘binary cross entropy’ loss function. For all machine learning models, we trained our models on 23 data points and tested on 128 data points. We obtained the highest accuracy of 73.44% using the feed forward neural network model.
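A sketch of a single forward pass through the 6 - 160 - 80 - 1 architecture described above, with ReLU hidden layers and a sigmoid output. The weights here are random and untrained, and the training loop (Adam optimizer, binary cross entropy loss) is omitted; variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):      # hidden-layer activation f(x) = max(0, x)
    return np.maximum(0.0, x)

def sigmoid(x):   # output activation f(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + np.exp(-x))

# Random (untrained) weights for the 6 - 160 - 80 - 1 architecture
W1, b1 = rng.normal(scale=0.1, size=(6, 160)), np.zeros(160)
W2, b2 = rng.normal(scale=0.1, size=(160, 80)), np.zeros(80)
W3, b3 = rng.normal(scale=0.1, size=(80, 1)), np.zeros(1)

def forward(features):
    """One forward pass: 6 ocular features -> P(Task)."""
    h1 = relu(features @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return sigmoid(h2 @ W3 + b3)

# 0.5 for an all-zero input (zero pre-activations, zero biases)
p = forward(np.zeros(6))[0]
```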
[0093] Hence, from the experimentation it can easily be inferred that the L1NS and STDP values were significantly higher for drivers during operation of secondary tasks than when they were not doing any task. SI velocity showed significantly higher values for secondary tasks like AC, window and talking. We plan to send a trigger based on cognitive load to a monitoring system, which will decide whether to alert the driver or to lock the secondary tasks. By setting the threshold for each driver based on his individual value of pupil dilation or SI velocity, we can classify whether he has gone through an increase in cognition. We can monitor such increases in cognitive load and generate an alert.
[0094] We compared the accuracy of distraction detection between the individual parameters and the Neural Network model. We analysed the accuracy with respect to an individual threshold corresponding to each driver as well as a universal threshold for all drivers. We took the Task region as positive and the No_task region as negative. We counted True Positives (TP), False Positives (FP), True Negatives (TN) and False Negatives (FN) as follows:
TP = if (parameter > threshold) and the event lies in the Task region
FP = if (parameter > threshold) and the event lies in the No_task region
FN = if (parameter < threshold) and the event lies in the Task region
TN = if (parameter < threshold) and the event lies in the No_task region
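The counting rules above can be sketched directly. The function names are illustrative; the boundary case parameter == threshold, undefined in the text, is assigned to the negative side here.

```python
def confusion_counts(values, labels, threshold):
    """Count TP/FP/TN/FN for threshold-based Task vs No_task detection.

    `values` are per-event parameter values (e.g. STDP or L1NS) and
    `labels` are True for Task events, False for No_task events.
    """
    tp = sum(1 for v, l in zip(values, labels) if v > threshold and l)
    fp = sum(1 for v, l in zip(values, labels) if v > threshold and not l)
    fn = sum(1 for v, l in zip(values, labels) if v <= threshold and l)
    tn = sum(1 for v, l in zip(values, labels) if v <= threshold and not l)
    return tp, fp, tn, fn

def accuracy(tp, fp, tn, fn):
    """Fraction of events classified correctly."""
    return (tp + tn) / (tp + fp + tn + fn)
```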
[0095] Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C ... and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.

[0096] While some embodiments of the present disclosure have been illustrated and described, those are completely exemplary in nature. The disclosure is not limited to the embodiments as elaborated herein only, and it would be apparent to those skilled in the art that numerous modifications besides those already described are possible without departing from the inventive concepts herein. All such modifications, changes, variations, substitutions, and equivalents are completely within the scope of the present disclosure. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0097] The present disclosure provides system and method for monitoring cognitive load of a driver of the vehicle.
[0098] The present disclosure provides system and method that can non-invasively and non-contact based determine cognitive load.
[0099] The present disclosure provides system and method for monitoring cognitive load of a driver of the vehicle in real time to help avoid any hazard.
[00100] The present disclosure provides system and method for monitoring cognitive load of a driver of the vehicle that is cost effective and easy to implement.
[00101] The present disclosure provides system and method for monitoring cognitive load of a driver of the vehicle that can be configured with a vehicle to help minimize hazards due to negligence of the driver.

Claims

We Claim:
1. A system for a vehicle for monitoring cognitive load of a driver of the vehicle, said system comprising:
a set of sensors (104) for sensing one or more ocular features of the driver; and
a cognitive engine (102) operatively coupled to the set of sensors (104), the cognitive engine (102) comprising a processor (202) coupled to a memory (204), the memory (204) storing instructions executable by the processor (202) to:
determine one or more parameter values from the sensed one or more ocular features; and
determine one or more deviation states based on processing of the determined one or more parameter values, to enable real-time monitoring of the cognitive load of the driver.
2. The system as claimed in claim 1, wherein the system comprises an alert generation engine for generating an alert signal based on the determined one or more deviation states.
3. The system as claimed in claim 1, wherein the one or more deviation states comprise at least:
a first deviation state pertaining to performing a primary task only;
a second deviation state pertaining to performing the primary task and a secondary task simultaneously; and
a third deviation state pertaining to a perceived hazard,
wherein the primary task pertains to a designated task, and the secondary task pertains to a task other than the primary task.
4. The system as claimed in claim 1, wherein the set of sensors (104) comprises any or a combination of an eye-gaze tracking sensor and an ambient light sensor.
5. The system as claimed in claim 1, wherein the one or more ocular features comprise pupil diameter and gaze position.
6. The system as claimed in claim 1, wherein the one or more parameters comprise Sum of Magnitude of Single-sided norm of Spectrum (L1NS) of pupil, low-pass filtered spectrum of pupil, standard deviation of pupil, fixation rate, saccade rate, and median saccadic intrusion (SI) velocity.
7. A vehicle comprising a system as claimed in claim 1 for monitoring cognitive load of a driver of the vehicle.
8. A method for monitoring cognitive load of a driver of a vehicle, said method comprising the steps of:
sensing, by a set of sensors (104), one or more ocular features of the driver;
determining, by a processor (202) of a cognitive engine (102) operatively coupled to the set of sensors (104), one or more parameter values from the sensed one or more ocular features; and
determining, by the processor (202), one or more deviation states based on processing of the determined one or more parameter values.
9. The method as claimed in claim 8, wherein the one or more deviation states comprise at least:
a first deviation state pertaining to performing a primary task only;
a second deviation state pertaining to performing the primary task and a secondary task simultaneously; and
a third deviation state pertaining to a perceived hazard,
wherein the primary task pertains to a designated task, and the secondary task pertains to a task other than the primary task.
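The claimed pipeline (ocular features → parameter values → deviation states) can be illustrated with a minimal sketch. It computes a subset of the claim-6 parameters from raw pupil-diameter and gaze samples, then maps them onto the three deviation states of claims 3 and 9. The function names, the L1NS computation as the L1 norm of the single-sided FFT spectrum, the 100 units/s saccade-speed cutoff, and the state thresholds are all illustrative assumptions; the published claims do not disclose numeric values or an exact mapping.

```python
import numpy as np

def ocular_parameters(pupil, gaze_x, gaze_y, fs=60.0):
    """Derive a few claim-6 parameters from raw eye-tracker samples.

    pupil: pupil-diameter samples; gaze_x/gaze_y: gaze positions;
    fs: sampling rate in Hz. All thresholds here are hypothetical.
    """
    centred = np.asarray(pupil, dtype=float)
    centred = centred - centred.mean()
    # Single-sided magnitude spectrum; the sum of magnitudes is the L1 norm
    # of the spectrum (L1NS) of the pupil-diameter signal.
    spectrum = np.abs(np.fft.rfft(centred))
    l1ns = float(spectrum.sum())
    # Standard deviation of pupil diameter.
    pupil_std = float(np.std(pupil))
    # Crude saccade detection: gaze speed above a hypothetical cutoff.
    speed = np.hypot(np.diff(gaze_x), np.diff(gaze_y)) * fs
    duration_s = len(gaze_x) / fs
    saccade_rate = float(np.count_nonzero(speed > 100.0) / duration_s)
    return {"l1ns": l1ns, "pupil_std": pupil_std, "saccade_rate": saccade_rate}

def deviation_state(params, low=5.0, high=15.0):
    """Map L1NS onto the three deviation states of claim 3 (thresholds hypothetical)."""
    if params["l1ns"] < low:
        return "primary task only"            # first deviation state
    if params["l1ns"] < high:
        return "primary + secondary task"     # second deviation state
    return "perceived hazard"                 # third deviation state
```

In a deployed system these parameters would be computed over a sliding window of sensor samples, with thresholds calibrated per driver; an alert generation engine (claim 2) would then act on the returned state.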

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20902881.0A EP4076191A4 (en) 2019-12-17 2020-12-16 System and method for monitoring cognitive load of a driver of a vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201941052358 2019-12-17

Publications (1)

Publication Number Publication Date
WO2021124140A1 (en) 2021-06-24

Family

ID=76478345

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/062016 Ceased WO2021124140A1 (en) 2019-12-17 2020-12-16 System and method for monitoring cognitive load of a driver of a vehicle

Country Status (2)

Country Link
EP (1) EP4076191A4 (en)
WO (1) WO2021124140A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114595714A (en) * 2022-02-23 2022-06-07 清华大学 A driver's cognitive state identification method and system based on multi-source information fusion
CN114720938A (en) * 2022-03-22 2022-07-08 南京理工大学 Single-bit sampling DOA estimation method for large-scale antenna arrays based on depth expansion
CN115429275A (en) * 2022-09-30 2022-12-06 天津大学 A driving state monitoring method based on eye movement technology
CN117636488A (en) * 2023-11-17 2024-03-01 中国科学院自动化研究所 Multimodal fusion learning ability assessment methods, devices and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1627915A (en) * 2002-02-19 2005-06-15 沃尔沃技术公司 System and method for monitoring and managing driver attention loads
JP2007068917A (en) * 2005-09-09 2007-03-22 Nissan Motor Co Ltd Visual state determination device, automobile, and visual state determination method
CN101278324A (en) * 2005-08-02 2008-10-01 通用汽车环球科技运作公司 Adaptive Driver Workload Estimator
US20130188838A1 (en) * 2012-01-19 2013-07-25 Utechzone Co., Ltd. Attention detection method based on driver's reflex actions
CN103445793A (en) * 2012-05-29 2013-12-18 通用汽车环球科技运作有限责任公司 Estimating cognitive load in human-machine interaction
CN108256487A (en) * 2018-01-19 2018-07-06 北京工业大学 Driving state detection device and method based on reverse binocular vision
CN109878527A (en) * 2017-12-04 2019-06-14 李尔公司 Distraction sensing system
CN110169779A (en) * 2019-03-26 2019-08-27 南通大学 Driver visual characteristics analysis method based on an eye-movement vision model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016115053A1 (en) * 2015-01-12 2016-07-21 Harman International Industries, Incorporated Cognitive load driving assistant
US10357195B2 (en) * 2017-08-01 2019-07-23 Panasonic Intellectual Property Management Co., Ltd. Pupillometry and sensor fusion for monitoring and predicting a vehicle operator's condition
US11017249B2 (en) * 2018-01-29 2021-05-25 Futurewei Technologies, Inc. Primary preview region and gaze based driver distraction detection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4076191A4 *


Also Published As

Publication number Publication date
EP4076191A4 (en) 2024-01-03
EP4076191A1 (en) 2022-10-26

Similar Documents

Publication Publication Date Title
Mahanama et al. Eye movement and pupil measures: A review
Sikander et al. Driver fatigue detection systems: A review
WO2021124140A1 (en) System and method for monitoring cognitive load of a driver of a vehicle
Santini et al. Bayesian identification of fixations, saccades, and smooth pursuits
Schmidt et al. Eye blink detection for different driver states in conditionally automated driving and manual driving using EOG and a driver camera
Guettas et al. Driver state monitoring system: A review
Yang et al. Multimodal sensing and computational intelligence for situation awareness classification in autonomous driving
KR20210060595A (en) Human-computer interface using high-speed and accurate tracking of user interactions
Panagopoulos et al. Forecasting markers of habitual driving behaviors associated with crash risk
Jiang et al. Capturing and evaluating blinks from video-based eyetrackers
Alam et al. Active vision-based attention monitoring system for non-distracted driving
Pandey et al. A survey on visual and non-visual features in Driver’s drowsiness detection
Shimada et al. High-frequency cybersickness prediction using deep learning techniques with eye-related indices
Kolus A systematic review on driver drowsiness detection using eye activity measures
Prabhakar et al. Comparing pupil dilation, head movement, and eeg for distraction detection of drivers
Khan et al. Efficient Car Alarming System for Fatigue Detection during Driving
Kamboj et al. Advanced detection techniques for driver drowsiness: a comprehensive review of machine learning, deep learning, and physiological approaches
Grimmer et al. The cognitive eye: Indexing oculomotor functions for mental workload assessment in cognition-aware systems
Zhang et al. EEG signal analysis for early detection of critical road events and emergency response in autonomous driving
Dang et al. A review study on the use of oculometry in the assessment of driver cognitive states
Peddarapu et al. Raspberry pi-based driver drowsiness detection
Luo et al. Understanding Hazard Recognition Behavior Using Eye-tracking Metrics in a VR-Simulated Environment: Learning from Successful and Failed Conditions
Bulygin et al. Image-based fatigue detection of vehicle driver: state-of-the-art and reference model
Azman et al. Non-intrusive physiological measurement for driver cognitive distraction detection: Eye and mouth movements
Bajaj et al. Performance analysis of hybrid model to detect driver drowsiness at early stage

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 20902881
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
ENP Entry into the national phase
Ref document number: 2020902881
Country of ref document: EP
Effective date: 20220718