König et al., 2016 - Google Patents

Modeling visual exploration in rhesus macaques with bottom-up salience and oculomotor statistics
- Document ID: 1589517693089730857
- Authors: König S; Buffalo E
- Publication year: 2016
- Publication venue: Frontiers in Integrative Neuroscience
Snippet
There is a growing interest in studying biological systems in natural settings, in which experimental stimuli are less artificial and behavior is less controlled. In primate vision research, free viewing of complex images has elucidated novel neural responses, and free …
Classifications
- G06F19/345 — Medical expert systems, neural networks or other automated diagnosis
- G06K9/6217 — Design or setup of recognition systems and techniques; extraction of features in feature space; clustering techniques; blind source separation
- G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
- G06T7/00 — Image analysis
- G09B5/02 — Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G09B19/00 — Teaching not covered by other main groups of this subclass
- G06N99/00 — Subject matter not provided for in other groups of this subclass
- G09B7/00 — Electrically-operated teaching apparatus or devices working with questions and answers
- A61B5/16 — Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| Li et al. | Automated detection of cognitive engagement to inform the art of staying engaged in problem-solving | |
| Chudzik et al. | Machine learning and digital biomarkers can detect early stages of neurodegenerative diseases | |
| González-García et al. | Content-specific activity in frontoparietal and default-mode networks during prior-guided visual perception | |
| End et al. | Preferential processing of social features and their interplay with physical saliency in complex naturalistic scenes | |
| Nuthmann et al. | How well can saliency models predict fixation selection in scenes beyond central bias? A new approach to model evaluation using generalized linear mixed models | |
| Hayes et al. | Scan patterns during real-world scene viewing predict individual differences in cognitive capacity | |
| D’Mello et al. | Machine-learned computational models can enhance the study of text and discourse: A case study using eye tracking to model reading comprehension | |
| Lee et al. | When eyes wander around: Mind-wandering as revealed by eye movement analysis with hidden Markov models | |
| Dewhurst et al. | How task demands influence scanpath similarity in a sequential number-search task | |
| CN109152559A (en) | For the method and system of visual movement neural response to be quantitatively evaluated | |
| Schlesinger et al. | Prediction-learning in infants as a mechanism for gaze control during object exploration | |
| Endo et al. | A convolutional neural network for estimating synaptic connectivity from spike trains | |
| Fabiano et al. | Gaze-based classification of autism spectrum disorder | |
| US20250040847A1 (en) | Psychological exam system based on artificial intelligence and operation method thereof | |
| Stocco et al. | Individual differences in reward‐based learning predict fluid reasoning abilities | |
| Arslan et al. | Towards emotionally intelligent virtual environments: classifying emotions through a biosignal-based approach | |
| Hirt et al. | Measuring emotions during learning: lack of coherence between automated facial emotion recognition and emotional experience | |
| Blything et al. | The human visual system and CNNs can both support robust online translation tolerance following extreme displacements | |
| CN116483209A (en) | Cognitive disorder man-machine interaction method and system based on eye movement regulation and control | |
| Zhang et al. | Human-centered intelligent healthcare: explore how to apply AI to assess cognitive health | |
| König et al. | Modeling visual exploration in rhesus macaques with bottom-up salience and oculomotor statistics | |
| Margolles et al. | Unconscious manipulation of conceptual representations with decoded neurofeedback impacts search behavior | |
| Ahmed et al. | Computational intelligence in detection and support of autism spectrum disorder | |
| Chen | Cognitive load measurement from eye activity: acquisition, efficacy, and real-time system design | |
| Djambazovska et al. | The Impact of Scene Context on Visual Object Recognition: Comparing Humans, Monkeys, and Computational Models |