WO2024220761A1 - Method and system for measuring brain reflexes and the modulatory effect of engagement and lifestyle - Google Patents
- Publication number
- WO2024220761A1 (PCT/US2024/025344)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- headset
- processing units
- metric
- exercise
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/163—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Definitions
- a method for measuring emotional engagement while watching video or picture content may be provided.
- the method may include displaying, on a display, a first video or picture content.
- the method may include, while an individual is watching the first video or picture content on the display: 1) using a camera to capture second video including one or more eyes of the individual; and 2) while capturing the second video, exposing the individual to one or more visual and/or auditory stimuli.
- the method may include determining values representing eye closure based on the second video.
- the method may include calculating a metric (such as cognitive load, effectiveness of training, etc.) based on the values.
- the metric may be emotional engagement (e.g., with the first video), and the metric may be determined based on an eyelid startle response. In some embodiments, the metric may be the impact of physical activity, and the metric may be determined based on eyeblink conditioning. All steps of the method may be performed on a single local device (such as a desktop computer, laptop computer, mobile phone, or tablet). Some of the steps may be performed using one or more remote processing units. For example, the steps of determining values and calculating the metric may be performed by one or more remote processing units. The method may include receiving first information from one or more remote processing units, the first information including the first video or picture content.
- the first information may also include information related to the stimuli to expose the individual to (e.g., the one or more visual and/or auditory stimuli, values representative of the one or more visual and/or auditory stimuli, or both).
- the method may include sending second information to one or more remote processing units, the second information including the second video.
- the display and camera may be operably coupled to a headset (such as a virtual reality (VR) headset, or an augmented reality (AR) or mixed reality (MR) headset).
- the headset may be operably coupled to one or more processing units performing the method.
- Calculating the metric may be performed by a trained machine learning algorithm that has been trained using categorized videos and/or picture content.
- the metric may be based at least partially on detected alpha startle responses.
- the method may include displaying the metric to the individual.
- a system for measuring emotional engagement while watching video or picture content may be provided.
- the system may include a display, a camera, a speaker, a memory, one or more processing units operably coupled to the display, camera, speaker, and memory, and a non-transitory computer-readable storage medium.
- the storage medium may include instructions that, when executed by the one or more processing units, cause the one or more processing units to, collectively, perform the disclosed method.
- the system may be configured as a desktop computer, laptop computer, mobile phone, or tablet.
- the one or more processing units may include one or more local processing units and one or more remote processing units.
- the one or more remote processing units may be configured to perform one or more steps of the method.
- the one or more processing units may, collectively, determine the values and calculate the metric.
- the display and camera may be operably coupled to a headset (such as a virtual reality (VR) headset, or an augmented reality (AR) or mixed reality (MR) headset).
- the headset may be operably coupled to one or more processing units performing the method.
- VR virtual reality
- AR augmented reality
- MR mixed reality
- FIGURES The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present invention and, together with a general description of the invention given above, and the detailed description of the embodiments given below, serve to explain the principles of the present invention.
- Figure 1 is a schematic illustration of a system.
- Figure 2 is an illustration of a headset.
- Figure 3 is a flowchart of a method.
- Figure 4 is an illustration of a template for tracking facial landmarks, and eye landmarks in particular.
- Figures 5A-5B are graphs showing exemplary normalized eyelid closure data as determined by an AI analyzing facial images of a user watching videos categorized as negative, neutral, or positive, when exposed to only a loud sound (5A), or a soft sound followed by a loud sound (5B).
- Figures 5C-5D are graphs showing eyelid closure of alpha startles as determined by an AI analyzing facial images of groups of users watching videos categorized as negative, neutral, or positive, when exposed to only a loud sound (5C), or a soft sound followed by a loud sound (5D).
- Figure 6A shows graphs of conditioned response (CR) amplitude by session for paired (CS + US) and CS-only trials combined in the sedentary and active groups with or without exercise before eyeblink conditioning sessions. Active individuals showed significant conditioning, with the post-exercise group showing significantly higher conditioned response amplitudes at sessions 1 and 2 compared to the no-exercise group. Shading represents the standard error of the mean.
- Figures 6B and 6C show graphs for sedentary (6B) and active (6C) group-averaged eyelid traces for paired (CS + US) trials (top panels) and CS-only trials (bottom panels) without (left panels) or after (right panels) exercise for three eyeblink conditioning sessions.
- Lightly shaded blocks indicate the presentation of the CS for 450 ms and darker shaded blocks indicate the actual (US) or expected (US omitted) presentation of the US for 50 ms co-terminating with the CS at 450 ms.
- paired trials note the peak in amplitude following the presentation of the US, namely the unconditioned response (UR) present in all groups.
- UR unconditioned response
- the acquisition of conditioned responses over the three sessions is also illustrated by the rise in amplitude in the CS-only trials, again particularly obvious in the active, post-exercise group.
- Figures 7A and 7B are graphs showing distribution of latency to conditioned response peak for all conditioned stimulus (CS) only trials across all sessions in sedentary (7A) and active (7B) groups with or without exercise.
- the darker shaded block at 400 ms indicates the expected onset of the unconditioned stimulus (omitted US) which is omitted in these trials.
- the lighter shaded block indicates the presentation of the CS. Note the distribution centred roughly around the expected onset of the US at 400 ms for all groups.
- Figures 8A and 8B are graphs and box-plots showing group averaged unconditioned response amplitudes for sedentary (8A) or active (8B) individuals with (solid line) or without (dashed line) exercise preceding the eyeblink conditioning session. Unconditioned response amplitudes were calculated for the first two blocks of session 1, prior to the development of conditioned responses.
- the present disclosure provides a method and system for measuring the impact or effectiveness of an activity based on a measure of brain reflexes. For example, one can measure emotional engagement in response to visual stimuli or effectiveness of a physical activity based on the brain reflexes. Said differently, the disclosed techniques can be used to correlate the determined values relating to eye movement and eye blinks to various metrics of interest.
- Brain reflexes are basic and unconscious responses that can be used as indicators of the functional integrity of the nervous system. An important reflex is the acoustically evoked eyelid startle reflex, which has been studied for more than fifty years.
- the startle reflex can serve as an effective unconditioned stimulus (US) in Pavlovian eyeblink conditioning, which is a well-known method for studying the neural correlates of procedural learning and memory.
- US unconditioned stimulus
- CS conditioned stimulus
- CR conditioned response
- Any appropriate CS and/or US may be utilized. This may include one or more visual stimuli, such as a particular video, image, or flash of light (such as a front-facing camera flash, or even an all-white image being displayed on a screen), etc.
- This may include one or more auditory stimuli, such as a tone generated at one or more frequencies, white noise, etc.
- auditory and/or visual stimuli may be configured to generate a startle response.
- PPI prepulse inhibition
- PPI is the behavioral phenomenon whereby the magnitude of the startle response is inhibited when a short and loud startling stimulus (the pulse, such as a loud sound) is preceded by a weaker stimulus that does not elicit a startle reflex (the prepulse, such as a quieter sound).
- PPI measures sensorimotor gating, which is the mechanism by which the nervous system filters out irrelevant sensory information, protecting the brain from overstimulation and enabling appropriate reactions to stimuli that are relevant.
- PPI is less brain region specific and probes midbrain function and modulatory effects that the midbrain receives from limbic systems, thalamus, and prefrontal areas.
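- As context for how the prepulse-inhibition effect described above might be quantified, the following is a minimal Python sketch (not taken from the disclosure) of the conventional percent-PPI score computed from per-trial startle amplitudes; the function name and the use of condition means are assumptions.

```python
def percent_ppi(pulse_alone_amplitudes, prepulse_pulse_amplitudes):
    """Conventional percent-PPI score: how much a weak prepulse reduces the
    startle response evoked by the loud pulse. Inputs are per-trial startle
    amplitudes (e.g., peak normalized eyelid closure) for each trial type."""
    pulse_alone = sum(pulse_alone_amplitudes) / len(pulse_alone_amplitudes)
    prepulse_pulse = sum(prepulse_pulse_amplitudes) / len(prepulse_pulse_amplitudes)
    if pulse_alone == 0:
        return 0.0
    return 100.0 * (pulse_alone - prepulse_pulse) / pulse_alone


# A prepulse that halves the average startle amplitude yields ~50% PPI.
print(percent_ppi([0.8, 0.9, 1.0], [0.4, 0.45, 0.5]))
```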
- a system for measuring impact or effectiveness of an activity may be provided. Referring to FIG.
- the system (100) may include one or more devices (110).
- the system may include a display (111), a camera (112), a speaker (113), a memory (114), one or more processing unit(s) (115), and a non-transitory computer-readable storage medium (116).
- the system may include a microphone (117).
- the storage medium may include instructions that, when executed by the one or more processing units, cause the one or more processing units to, collectively, perform specific steps of a method.
- processing unit generally refers to a computational device capable of accepting data and performing mathematical and logical operations as instructed by program instructions. This may include any central processing unit (CPU), graphics processing unit (GPU), core, hardware thread, or other processing construct known or later developed.
- the term “thread” is used herein to refer to any software or processing unit or arrangement thereof that is configured to support the concurrent execution of multiple operations.
- the system may be configured as (or may include) a desktop computer, laptop computer, mobile phone, or tablet.
- all steps may be performed using only the processing units on the device (e.g., on a smartphone, etc.).
- one or more steps may be performed by remote processing units.
- the one or more processing units may include one or more local processing units (e.g., processing unit(s) (115)) and one or more remote processing units (e.g., remote processing unit(s) (120) and/or remote processing unit(s) (141)).
- remote processing unit(s) (120) may be a cloud-based processing unit.
- remote processing unit(s) (141) may be configured to receive and/or display information to a remote user (140), e.g., a clinician, doctor, researcher, etc.
- the system may include headphones (131) for a user (130) to wear.
- the display (111) and camera (112) may be operably coupled to a headset (200).
- the display and camera may be disposed within a headset housing (201).
- the headset may be a virtual reality (VR) headset (e.g., a headset that provides a fully virtual experience, where the user can only see the display provided in the headset), an augmented reality (AR) headset (e.g., a headset providing a live or near-live image of the physical world captured by a camera, into which a computer-generated object or objects are superimposed so as to appear to be a part of the physical world when the live or near-live image and the object or objects are displayed on a screen; a display screen or other controls may cause the augmented reality to adjust as changes to the captured images of the physical world indicate updated perspectives of the physical world), or a mixed reality (MR) headset (a headset that combines virtual objects and spaces with physical-reality objects; MR is closely related to augmented reality but may include, for example, a projection of an actual image of a person who is in a different physical location, using cameras to capture that person's image and then superimposing that person within a different physical environment using augmented reality). As shown in FIG. 2, the headset may have a strap (202) configured to hold the headset on a user's head.
- As shown in FIG. 1, the headset may be operably coupled, either wirelessly or by wire, to one or more processing units performing the method.
- a wire (211) is used to couple the headset (200) to a housing (210) containing the memory (114), processing unit(s) (115), and non-transitory computer-readable storage medium (116).
- the processing unit(s) may be configured to collectively perform various steps of a method.
- the method (300) may optionally include receiving (310) first information from one or more remote processing units.
- the first information may include information defining or relating to a video or image that may be displayed to a user. In some embodiments, the video or image to be displayed is what is received.
- the researcher could send a video or image directly to a user's device, or the researcher could send a URL to a user's device, after which the device could process that URL and download a video or image found at the URL, storing it for later use.
- the researcher could also send information stating the length and intensity of any prepulses or pulses used for stimuli.
- the video or image to be displayed is randomly determined.
- the first information may also include information related to the stimuli to expose the individual to (e.g., the one or more visual and/or auditory stimuli, values representative of the one or more visual and/or auditory stimuli, or both).
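- Purely as an illustration of how a local device might handle this "first information", the sketch below parses a hypothetical JSON payload containing a content URL plus pulse/prepulse parameters and downloads the content for later display; all field names and the JSON transport are assumptions, not part of the disclosure.

```python
import json
import urllib.request
from dataclasses import dataclass


@dataclass
class FirstInformation:
    content_url: str      # URL of the first video or picture content (hypothetical field)
    prepulse_ms: int      # prepulse duration in milliseconds (hypothetical field)
    prepulse_db: float    # prepulse intensity (hypothetical field)
    pulse_ms: int         # pulse duration in milliseconds (hypothetical field)
    pulse_db: float       # pulse intensity (hypothetical field)


def receive_first_information(payload: str, save_path: str) -> FirstInformation:
    """Parse a (hypothetical) payload from a remote processing unit and download
    the referenced content, storing it for later use as described above."""
    info = FirstInformation(**json.loads(payload))
    urllib.request.urlretrieve(info.content_url, save_path)
    return info
```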
- the method may include testing (320) brain reflexes of a user.
- the testing may include, while an individual is watching the first video or picture content on the display, capturing (324) (e.g., with camera (112)) a second video including one or more eyes of the individual.
- the testing may include, while capturing the second video, exposing (326) the individual to one or more visual and/or auditory stimuli. Any appropriate visual or auditory stimuli may be utilized.
- a camera flash or causing the display to flash bright white for a brief amount of time may be used as a visual stimulus.
- a tone (such as a beep) or white noise may be used as an auditory stimulus.
- the second video may capture video for a period of time before the stimuli, during the stimuli, and for a period of time after the stimuli. The period of time after the stimuli may be up to 500 ms after the stimuli.
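- A minimal sketch of selecting the analysis window from the second video: frames from shortly before stimulus onset up to 500 ms afterwards, matching the post-stimulus period mentioned above. The 200 ms pre-stimulus margin and the per-frame millisecond timestamps are assumptions for illustration.

```python
def frames_around_stimulus(frame_times_ms, stimulus_onset_ms, pre_ms=200, post_ms=500):
    """Return indices of frames captured in [onset - pre_ms, onset + post_ms].

    frame_times_ms: capture timestamp of each frame, in milliseconds."""
    start, end = stimulus_onset_ms - pre_ms, stimulus_onset_ms + post_ms
    return [i for i, t in enumerate(frame_times_ms) if start <= t <= end]


# Example: ~30 fps capture (one frame every ~33 ms), stimulus onset at 1000 ms.
times = [i * 33.3 for i in range(90)]
window_indices = frames_around_stimulus(times, 1000.0)
```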
- the method may optionally include sending (330) second information to a remote processing unit, the second information including the second video.
- the method may include determining (340) values representing eye-related movements, such as eye closure, based on the second video.
- This may also include determining values representing blink amplitude, blink duration, and blink timing based on the second video including one or more eyes of the individual.
- Blinks elicited by the presentation of a blink- evoking stimulus such as an unexpected loud sound or visual stimulus, may be determined.
- spontaneous blinks may be determined.
- the eye-related movements may include a spontaneous eye blink.
- the eye-related movements may include a reflex eye blink. In general, spontaneous blinks occur without any external stimuli and/or internal effort, while reflex blinks typically occur in response to external stimuli.
- One type of reflex blink is an anticipatory eye blink, which may be developed during eyeblink conditioning.
- the eye-related movements may include eye position tracking.
- the eye position tracking may include the tracking of (i) fast eye movement (saccades and micro-saccades), (ii) smooth pursuit movements, and/or (iii) vestibulo-ocular movements.
- if eye position tracking is utilized, the device is configured to utilize a VR-type viewer as described herein.
- the eye-related movements may include pupil size tracking to measure the user's alertness. As is known in the art, pupil size decreases as alertness wanes. By analyzing captured images in order to measure the pupil diameter, and optionally normalizing them, the pupil size can be tracked over time in order to determine if the user is sufficiently alert.
- a level of alertness is determined by comparing the pupil size to other pupil-size measurements gathered during the user's testing. In some embodiments, a level of alertness is determined by comparing a measured pupil size to a threshold. In some embodiments, the eye pupil size tracking may be used to measure conditioned pupil responses. This is similar to eyeblink conditioning, but where the pupil size is measured instead of the eyelid position. That is, an image is captured containing the pupil, and the pupil diameter is measured, and preferably normalized, after experiencing the conditioned and unconditioned stimuli, just as is done using FEC for eyeblink conditioning. For example, computer vision and image processing techniques may be used to detect landmarks on a human face in a fully automated, real-time manner.
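- Returning to the pupil-based alertness check above, a minimal sketch of comparing a normalized pupil diameter to a threshold follows; normalizing against the session's own minimum and maximum, and the 0.3 threshold, are assumptions chosen for illustration.

```python
def is_alert(session_diameters_px, current_diameter_px, threshold=0.3):
    """Normalize the current pupil diameter against the range observed so far in
    this user's session; small normalized values suggest waning alertness,
    since pupil size decreases as alertness wanes."""
    d_min, d_max = min(session_diameters_px), max(session_diameters_px)
    if d_max == d_min:
        return True  # not enough variation yet to judge alertness
    normalized = (current_diameter_px - d_min) / (d_max - d_min)
    return normalized >= threshold
```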
- the algorithm is optimized to provide fast and accurate tracking of eyelids in both adults and infants. Any appropriate technique known to train a machine-learning algorithm can be utilized here.
- An algorithm may be used to detect a plurality of landmarks on the face.
- In FIG. 4, an example of a template (400) using 68 landmarks is shown.
- the template (400) may comprise or consist of 6 landmarks for each eye captured in the image. The six landmarks are, as seen in FIG.4, a left corner (401), an upper left eyelid mark (402), an upper right eyelid mark (403), a right corner (404), a bottom right eyelid mark (405), and a bottom left eyelid mark (406).
- FECNORM = 1 - (FEC - FECMIN)/(FECMAX - FECMIN).
- An FECNORM of 0 corresponds to an eye that is fully open
- an FECNORM of 1 corresponds to an eye that is fully closed.
- the Apple ARKit’s blend shape coefficients and MediaPipe can provide coefficients (generally values from 0.0 to 1.0) for detected facial expressions, including right and left eye blink closures (eyeBlinkRight and eyeBlinkLeft, respectively). In some embodiments, where two eyes are detected, various techniques may be used.
- An FEC may be calculated for each eye and the results may be, e.g., averaged together (or otherwise statistically combined). An FEC may be calculated for each eye, and the minimum value may be utilized. An FEC may be calculated for each eye, and the maximum value may be utilized. An FEC may be calculated for each eye, and a difference between the two FEC values may be determined. If the difference is above a threshold, the value of a flag may be set to 1 or a variable may be increased, indicating an anomalous response occurred. In some embodiments, if no eyes are detected in a given image, or more than two eyes are detected, the image may be skipped.
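- To make the landmark-based calculation concrete, below is a minimal, illustrative sketch that derives an eye-openness value from the six landmarks of FIG. 4, normalizes it with calibration minimum/maximum values into an FECNORM-style closure value, and averages the two eyes (one of the combination options listed above). Treating the vertical lid distances divided by eye width as the raw measure is an assumption; the disclosure does not mandate this particular geometry.

```python
import math


def eye_openness(landmarks):
    """landmarks: six (x, y) points ordered as in FIG. 4 -- left corner (401),
    upper-left lid (402), upper-right lid (403), right corner (404),
    bottom-right lid (405), bottom-left lid (406). Returns the mean vertical
    lid distance normalized by horizontal eye width (scale invariant)."""
    lc, ul, ur, rc, br, bl = landmarks
    vertical = (math.dist(ul, bl) + math.dist(ur, br)) / 2.0
    width = math.dist(lc, rc)
    return vertical / width


def fec_norm(openness, openness_min, openness_max):
    """Normalized eyelid closure in [0, 1]: 0 = fully open, 1 = fully closed,
    following FECNORM = 1 - (FEC - FECMIN)/(FECMAX - FECMIN)."""
    return 1.0 - (openness - openness_min) / (openness_max - openness_min)


def combined_fec(left_landmarks, right_landmarks, openness_min, openness_max):
    """Average the per-eye normalized closure values (one of several
    combination strategies described in the text)."""
    left = fec_norm(eye_openness(left_landmarks), openness_min, openness_max)
    right = fec_norm(eye_openness(right_landmarks), openness_min, openness_max)
    return (left + right) / 2.0
```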
- a calibration sequence may have occurred prior to these steps, and FECMIN and FECMAX values may be determined based on the images or video captured during calibration. In some embodiments, FECMIN and FECMAX values may be determined based solely on the images or video captured as part of the testing described above.
- As shown in FIG. 5A, when a user has been exposed to a stimulus (such as an unexpected loud sound), the individual may close their eyes to some extent. There may be an alpha startle (501) in response to the loud noise. There may also be a beta startle (502) response that appears some time after the alpha startle.
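- As an illustration of detecting the alpha and beta startle components in a normalized eyelid-closure trace, the sketch below takes the peak closure in two post-stimulus time windows. The specific window boundaries (roughly 20-100 ms for the alpha startle and 100-500 ms for the later beta component) and the 0.10 amplitude criterion are assumptions chosen for illustration, not values stated for this step in the disclosure.

```python
def peak_in_window(times_ms, fec_trace, start_ms, end_ms):
    """Peak (value, time_ms) of the eyelid-closure trace within [start_ms, end_ms],
    with times measured relative to stimulus onset."""
    window = [(v, t) for v, t in zip(fec_trace, times_ms) if start_ms <= t <= end_ms]
    return max(window) if window else (0.0, None)


def detect_startles(times_ms, fec_trace, alpha_window=(20, 100),
                    beta_window=(100, 500), min_amplitude=0.10):
    """Report alpha and beta startle peaks for one trial (illustrative criteria)."""
    alpha_amp, alpha_t = peak_in_window(times_ms, fec_trace, *alpha_window)
    beta_amp, beta_t = peak_in_window(times_ms, fec_trace, *beta_window)
    return {
        "alpha": {"amplitude": alpha_amp, "time_ms": alpha_t, "present": alpha_amp >= min_amplitude},
        "beta": {"amplitude": beta_amp, "time_ms": beta_t, "present": beta_amp >= min_amplitude},
    }
```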
- the method may include calculating (350) a metric based on the values.
- Calculating the metric may be performed by a trained machine learning algorithm that has been trained using categorized videos and/or picture content.
- the method may include training (360) the machine learning algorithm.
- the metric may be calculated by comparing the value representing the eye-related movement to a calibration curve or to predetermined threshold ranges. These calibration curves or threshold ranges may be specific to the individual, or may be a generic calibration curve or threshold range that applies to multiple users. As an example, for emotional engagement, in some embodiments the calibration curve or threshold ranges may be determined by showing a user (or a plurality of users) a plurality of randomized videos or images.
- the plurality of randomized videos or images may include at least one video or image that is known to have a positive valence (e.g., a calming video or a cute image) and at least one video or image that is known to have a negative valence (e.g., an upsetting image or a video that generates fear).
- a positive valence e.g., a calming video or a cute image
- a negative valence e.g., an upsetting image or a video that generates fear.
- That calibration curve or threshold range can then be used to correlate the eye-related movement to the degree that a test video or image generates an emotional (positive or negative) response in the user.
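- A minimal sketch of that correlation step follows: calibration endpoints are taken from responses to videos of known positive and negative valence, and a test response is placed between them. The linear interpolation and the direction of the scale (responses to negative content taken as the high end) are assumptions for illustration.

```python
def build_calibration(positive_responses, negative_responses):
    """Calibration endpoints: mean eye-related response (e.g., peak eyelid closure)
    during known-positive and known-negative calibration videos."""
    return (sum(positive_responses) / len(positive_responses),
            sum(negative_responses) / len(negative_responses))


def valence_index(test_response, calibration):
    """Place a test response between the calibration endpoints, clipped to [0, 1]:
    values near 0 resemble responses to positive content, values near 1 resemble
    responses to negative content."""
    pos_mean, neg_mean = calibration
    if neg_mean == pos_mean:
        return 0.5
    return min(1.0, max(0.0, (test_response - pos_mean) / (neg_mean - pos_mean)))
```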
- the metric may be based at least partially on detected alpha startle responses. In some embodiments, the metric may be based at least partially on detected beta startle responses. In some embodiments, the metric may be emotional engagement (e.g., with the first video), and the metric may be determined based on an eyelid startle response. In some embodiments, the metric may be the impact of physical activity, and the metric may be determined based on eyeblink conditioning.
- FIGS. 5A-5D are graphs related to an example where participants were shown short video clips on, e.g., a smartphone, with either neutral, positive, or negative valence, and then exposed to a pulse and optionally a pre-pulse of stimuli, the stimuli being a bright light (camera flash) and a loud noise (white noise).
- Video clips were taken from the Database of Emotion Videos from Ottawa (DEVO).
- Statistically significant differences between the three valences can be detected in terms of the degree of eye closure experienced when exposed to a loud noise with (FIGS.5B, 5D) or without (FIGS.5A, 5C) a pre-pulse.
- the eye closure amount (and/or blink rate, auditory startle responses, etc.) may be used to determine a metric (such as emotional engagement, sufficiency of exercise, etc.). In some embodiments, those determined metric(s) may be used to determine an additional metric. For example, a score for a video may be determined that is an average of the emotional engagement determined by the eye closure amount across a plurality of individuals who watched the video.
- the instructions on the storage medium may cause the processing unit(s) to include two parts or modules: a testing module and an analysis module.
- the testing module presents visual stimuli to the participant and records the physiological responses (see FIG.3, testing (320) step).
- the visual stimuli may be, e.g., videos, pictures, or any other type of visual content.
- the physiological responses that may be recorded include, e.g., auditory startle responses, prepulse inhibition, and spontaneous eye blink and eye movements. All these responses may be measured by the analysis module during execution of the method.
- the present disclosure provides a convenient and accessible system for measuring emotional engagement in response to visual stimuli. The system allows remote testing and analysis, which makes it suitable for use in a variety of settings, including research, marketing, and clinical applications.
- the use of auditory startle responses, prepulse inhibition, and spontaneous eye blink and eye movements provides an objective and reliable measure of various metrics that may be useful, e.g., for the benefit of the individual being tested such as for self-improvement or diagnostic purposes.
- the effects of a physical activity can be measured, for example, to determine if the physical activity was effective, if the level of activity was sufficient to provide a detectable benefit, etc. Alternatively, this may include being able to quantify emotional engagement, which can be used to optimize the effectiveness of visual stimuli.
- Example 1: Participants. Forty neurotypical participants aged between 18 and 40 years were recruited by social media invitations to participate in the study. This sample size is in line with other eyeblink conditioning research in humans.
- Participants were divided into an active or sedentary group based on their weekly hours of physical activity. The cut-off point for group classification was determined using the lower limit of the WHO guidelines for physical activity in adults aged 18-64 years. Participants doing less than 2.5 hours of moderate intensity or less than 75 minutes of vigorous intensity exercise were in the sedentary group and the other participants were in the active group. Moderate intensity was defined as: “Exercise that increases heart rate but you are still able to hold a conversation” and vigorous as “Exercise that raises your heart rate so that you are unable to speak”. Education level was similar across groups as all subjects either had a university degree or were university students. Furthermore, the average age and hours of sleep per night were similar across groups (see Table 1).
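- A minimal sketch of that group assignment follows, reading the criterion as: a participant who meets either WHO lower limit (2.5 hours of moderate-intensity or 75 minutes of vigorous-intensity exercise per week) is active, otherwise sedentary. That reading, and the function name, are assumptions for illustration.

```python
def activity_group(moderate_hours_per_week, vigorous_minutes_per_week):
    """Classify a participant using the WHO lower limits cited above: meeting
    either limit counts as 'active', otherwise 'sedentary' (an interpretation
    of the stated cut-off)."""
    if moderate_hours_per_week >= 2.5 or vigorous_minutes_per_week >= 75:
        return "active"
    return "sedentary"


# Example: 3 h of moderate exercise per week is enough to be classed as active.
print(activity_group(3.0, 0))
```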
- the eyeblink conditioning experiment consisted of the pairing of a CS with a US (here, a burst of white noise plus activation of the camera’s selfie flash).
- the CS (here, a white dot) was presented for 450 ms.
- the US was presented 400 ms after the onset of the CS and co-terminated with the CS.
- in US-only trials, the stimuli were presented for 50 ms, 400 ms from trial onset.
- Each eyeblink conditioning session consisted of 10 blocks and a pre-block at the start of each session.
- the pre-block consisted of 3 CS-only trials and 2 US-only trials.
- a mean baseline CR amplitude per subject was determined at session 0. Session 0 was defined as the pre-block CS-only trials from session 1. CR amplitude was determined as the maximum signal amplitude value at 430 ms, for paired and CS-only trials. This time value was chosen to allow for a latency of 30 ms following the expected presentation of the US at 400 ms. There is a latency in response to the US (supplemental figure 2), likely due to retinal processing of the flash [20].
- CRs were defined as trials with a maximum signal amplitude above 0.10 in a time window ranging from 60 – 750 ms. Additionally, the mean percentage of well-timed CRs was calculated per group. A well-timed CR was defined as a trial with a maximum signal amplitude above 0.10 in a time window between 400 – 500 ms.
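- The CR criteria above lend themselves to a short illustrative sketch: CR amplitude read at 430 ms, a CR counted when the peak exceeds 0.10 between 60 and 750 ms, and a well-timed CR when a peak above 0.10 falls between 400 and 500 ms. The per-trial data layout (a trace with per-sample timestamps) and reading the 430 ms amplitude as the sample nearest that time are assumptions.

```python
def value_at(times_ms, trace, t_ms):
    """Trace value at the sample closest to t_ms."""
    idx = min(range(len(times_ms)), key=lambda i: abs(times_ms[i] - t_ms))
    return trace[idx]


def peak_between(times_ms, trace, lo_ms, hi_ms):
    """Maximum trace value within [lo_ms, hi_ms]."""
    vals = [v for v, t in zip(trace, times_ms) if lo_ms <= t <= hi_ms]
    return max(vals) if vals else 0.0


def score_trial(times_ms, trace):
    """Score one paired or CS-only trial using the criteria described above."""
    return {
        "cr_amplitude": value_at(times_ms, trace, 430.0),  # 30 ms after expected US onset
        "is_cr": peak_between(times_ms, trace, 60.0, 750.0) > 0.10,
        "is_well_timed": peak_between(times_ms, trace, 400.0, 500.0) > 0.10,
    }


def percent_well_timed(trials):
    """Percentage of trials with a well-timed CR; trials is a list of
    (times_ms, trace) tuples for one participant or group."""
    scored = [score_trial(t, tr) for t, tr in trials]
    return 100.0 * sum(s["is_well_timed"] for s in scored) / len(scored) if scored else 0.0
```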
- Statistical analysis: All statistical analyses and visualisations were done in R 4.3.1. Potential differences between groups in age, average weekly exercise, and sleep hours were tested using a one-way ANOVA.
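- The group comparisons were done in R; purely as an illustration, the same one-way ANOVA can be sketched in Python with SciPy and made-up example values (an assumption; the disclosure specifies R, not Python).

```python
from scipy import stats

# Hypothetical example values: nightly sleep hours reported by the two groups.
sedentary_sleep = [7.0, 6.5, 8.0, 7.5, 6.0, 7.0]
active_sleep = [7.5, 7.0, 8.0, 6.5, 7.0, 7.5]

# One-way ANOVA testing whether the group means differ.
f_stat, p_value = stats.f_oneway(sedentary_sleep, active_sleep)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```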
- Locomotor activity signalling via cerebellar mossy fibres (MFs) may converge with the CS MF signalling, thereby facilitating learning. While exercise may have acted directly within the cerebellar cortex to enhance learning, it is unclear why such an effect would differ for active and sedentary individuals.
- the finding that acute exercise facilitates eyeblink conditioning in active but not sedentary individuals may point towards a mechanistic role of neuropeptidergic transmitters and/or neurotrophins. Indeed, both human and animal studies on neuropeptidergic transmitters and neurotrophins show differential effects of acute exercise in active compared to sedentary subjects.
- the dopaminergic, adrenergic, and norepinephrinergic pathways, which are all catecholaminergic systems that prominently co-release neuropeptides, are upregulated in humans and animals following exercise. While the proposed role of these neurotransmitters in exercise-induced cognitive benefits is frequently studied, their potential influence on associative procedural learning has received less attention. Despite this, there is evidence for a role of neurotransmitters in cerebellar learning. In rabbits, pharmacological monoamine depletion resulted in a dose-dependent reduction in CRs in an eyeblink conditioning task. Additionally, in rats, cerebellar norepinephrine was shown to be involved in the acquisition of CRs.
- the method may include calculating a cognitive load based on the eye movement and eye blink values, where the cognitive load can be the metric of interest or one or more proxies for such metric.
- the effect of physical activity can be seen with these systems.
- non-limiting examples of physical activities that can be considered by the system include exercise, meditation, breathing exercises, sleep, etc.
- Individuals who are engaged with a cardio or fitness program have a distinct phenotype that can be detected with the disclosed techniques. For example, engaged individuals may be more responsive and have less “noisy” results after a fitness program than non-engaged individuals. Examples of the effect of a physical activity on various metrics can be seen in FIG.6A-6C.
- the user may be provided a user interface indicating results of the testing, for example, for self-improvement purposes or for determining a state of alertness when interacting with certain content.
- a user may receive results in a user interface, e.g., on a watch, phone, etc., indicating their level of engagement in a particular fitness activity.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Child & Adolescent Psychology (AREA)
- Developmental Disabilities (AREA)
- Educational Technology (AREA)
- Hospice & Palliative Care (AREA)
- Psychiatry (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Pathology (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2024259510A AU2024259510A1 (en) | 2023-04-19 | 2024-04-19 | Method and system for measuring brain reflexes and the modulatory effect of engagement and lifestyle |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363460451P | 2023-04-19 | 2023-04-19 | |
| US63/460,451 | 2023-04-19 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024220761A1 (en) | 2024-10-24 |
Family
ID=93153382
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/025344 (WO2024220761A1, pending) | Method and system for measuring brain reflexes and the modulatory effect of engagement and lifestyle | 2023-04-19 | 2024-04-19 |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU2024259510A1 (en) |
| WO (1) | WO2024220761A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200000334A1 (en) * | 2013-05-01 | 2020-01-02 | Musc Foundation For Research Development | Monitoring neurological functional status |
| US20200214559A1 (en) * | 2013-01-25 | 2020-07-09 | Wesley W.O. Krueger | Ocular-performance-based head impact measurement using a faceguard |
| US11093033B1 (en) * | 2019-10-28 | 2021-08-17 | Facebook, Inc. | Identifying object of user focus with eye tracking and visually evoked potentials |
| US20210339043A1 (en) * | 2016-11-17 | 2021-11-04 | Cognito Therapeutics, Inc. | Neural stimulation via visual stimulation |
| US20220326766A1 (en) * | 2021-04-08 | 2022-10-13 | Google Llc | Object selection based on eye tracking in wearable device |
Also Published As
| Publication number | Publication date |
|---|---|
| AU2024259510A1 (en) | 2025-10-23 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24793539; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: AU2024259510; Country of ref document: AU; Ref document number: 2024793539; Country of ref document: EP |
| | ENP | Entry into the national phase | Ref document number: 2024259510; Country of ref document: AU; Date of ref document: 20240419; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: KR1020257038314; Country of ref document: KR |
| | ENP | Entry into the national phase | Ref document number: 2024793539; Country of ref document: EP; Effective date: 20251009 |