CN120894817A - An automatic image adjustment device based on human eye parallax - Google Patents
- Publication number
- CN120894817A (application number CN202511434130.0A)
- Authority
- CN
- China
- Prior art keywords
- user
- image
- module
- eyes
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/182—Level alarms, e.g. alarms responsive to variables exceeding a threshold
Abstract
The invention relates to the field of image processing and discloses an automatic image adjustment device based on human eye parallax, comprising an image acquisition module, a user physiological characteristic initialization module, a processing module, an image adjustment module, a display module and a visual health monitoring module. The image acquisition module acquires images of the user's two eyes in real time; the user physiological characteristic initialization module measures and stores the user's physiological parameters; the processing module determines and predicts the target three-dimensional spatial position of the user's eyes from the eye images and the stored physiological parameters; the image adjustment module generates an adapted image matched to the user's eyes; the display module displays the adapted image; and the visual health monitoring module evaluates the user's visual fatigue based on time-series data of the target three-dimensional spatial position. By predictively anticipating the user's eye-movement state, the invention achieves an ultra-low-latency image-following effect and suppresses the image tearing and ghosting produced when the user moves the head rapidly.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an automatic image adjusting device based on human eye parallax.
Background
Currently, in order to construct an immersive visual experience in fields such as Virtual Reality (VR), Augmented Reality (AR) and 3D display, precise management of human eye parallax is a fundamental and core technology. Various technical paths have been developed in the industry to achieve stereoscopic image display and adjustment. One mainstream class of solutions tracks the user's head or eyeball position by integrating sensors such as cameras, so that the system can dynamically adjust the image output as the user's viewpoint changes. In addition, many consumer-grade display devices widely employ fixed parallax barriers or lens arrays to provide 3D visual effects without requiring special glasses. Meanwhile, to accommodate the perceptual differences of different users, some systems also provide a manual adjustment mechanism that allows the user to personally fine-tune the parallax parameters through software or a physical interface.
Although the related art has promoted the development of stereoscopic display technology to a certain extent, its design characteristics leave room for further optimization of the user experience in practical applications. First, in a dynamic adjustment system there is an inherent processing delay from the moment the sensor captures the user's motion to the moment image adjustment and presentation are finally completed. When the user moves quickly, this delay causes a momentary mismatch between vision and somatosensation, affecting the fluency of the experience. Second, for solutions that employ fixed parallax or rely on manual adjustment by the user, it remains challenging to precisely adapt to each user's unique inter-pupillary distance, and a generic, inaccurate setting may affect long-term viewing comfort. Finally, the design goals of existing image adjustment systems focus primarily on realizing visual effects, and rarely exploit the system's tracking data to analyze user status, for example to assess visual fatigue and provide corresponding feedback.
Disclosure of Invention
The invention aims to provide an automatic image adjustment device based on human eye parallax, addressing the poor user experience and visual fatigue caused by existing visual adjustment devices, whose image adjustment lags, which cannot adapt to individual physiological differences, and which lack health monitoring.
In order to achieve the above purpose, the invention is realized by the following technical scheme:
an automatic image adjusting device based on human eye parallax, comprising:
the image acquisition module is used for acquiring images of the eyes of a user in real time;
the user physiological characteristic initializing module is connected with the image acquisition module and is used for measuring and storing physiological parameters of a user;
The processing module is connected with the image acquisition module and the user physiological characteristic initialization module and is used for determining and predicting the target three-dimensional space position of the eyes of the user according to the images of the eyes of the user and the stored physiological parameters of the user;
the image adjustment module is connected with the processing module and is used for generating an adaptive image adaptive to the eyes of the user according to the target three-dimensional space position of the eyes of the user;
the display module is connected with the image adjustment module and used for displaying the adaptive image;
And the visual health monitoring module is connected with the processing module and is used for evaluating the visual fatigue of the user based on the time sequence data of the target three-dimensional space position.
Preferably, the image acquisition module includes:
The binocular infrared camera unit is used for synchronously collecting infrared stereoscopic images of the eyes of a user;
And the synchronous control unit is used for controlling the synchronous acquisition frequency of the binocular infrared camera unit.
Preferably, the user physiological characteristic initialization module includes:
the calibration guiding unit is used for presenting a calibration pattern for guiding the user to look at;
and the interpupillary distance calculating unit is used for calculating and storing the interpupillary distance of the user as the physiological parameter according to the image acquired when the user gazes at the calibration pattern.
Preferably, the processing module includes:
The three-dimensional coordinate reconstruction unit is used for calculating the current three-dimensional space positions of the two eyes of the user according to the infrared stereoscopic images of the two eyes of the user;
and the time sequence state prediction unit is used for predicting the target three-dimensional space position according to the time sequence data of the current three-dimensional space position.
Preferably, the image adjustment module includes:
A virtual camera setting unit configured to set the target three-dimensional space position as a position of a virtual camera in a three-dimensional rendering scene;
a view cone constructing unit, configured to construct an asymmetric viewing view cone according to the position of the virtual camera and the physical boundary of the display module;
and the image rendering unit is used for generating the adaptive image according to the asymmetric observation view cone.
Preferably, the display module is an organic light-emitting diode display screen supporting a refresh rate of 120 Hz or higher.
Preferably, the visual health monitoring module comprises:
An eye movement feature extraction unit for extracting the eye movement features of blink frequency, saccade characteristics and fixation stability from the time-series data of the target three-dimensional spatial position;
and the fatigue evaluation unit is used for calculating the visual fatigue index of the user according to the eye movement characteristics and a preset health model, and triggering early warning when the index exceeds a preset threshold value.
Preferably, the interpupillary distance calculation unit calculates and stores the interpupillary distance of the user by calculating the euclidean distance between the three-dimensional space coordinates of the pupil center of the left eye and the three-dimensional space coordinates of the pupil center of the right eye, and the relationship is:
IPD = √[(x_L − x_R)² + (y_L − y_R)² + (z_L − z_R)²]

In the formula, IPD is the interpupillary distance; x_L, y_L and z_L are respectively the X-, Y- and Z-axis components of the left-eye three-dimensional coordinate vector; and x_R, y_R and z_R are respectively the X-, Y- and Z-axis components of the right-eye three-dimensional coordinate vector.
Preferably, the time-series state prediction unit predicts the target three-dimensional spatial position using a Kalman filter model, correcting the a priori state estimate with the current measurement through the following state-update relation to generate the a posteriori optimal state estimate:

x̂_k = x̂_k^- + K_k · (z_k − H · x̂_k^-)

In the formula, x̂_k represents the optimal estimate of the state at time k, incorporating the measurement at that time; x̂_k^- represents the state at time k predicted from the motion model using only the estimate at the previous time k−1, before the measurement is obtained; K_k is the Kalman gain; z_k represents the three-dimensional spatial position of the user's eyes actually measured by the sensor at time k; and H is the observation matrix.
Preferably, the fatigue evaluation unit calculates the visual fatigue index of the user by the following weighted model:

F = w_1 · N(B) + w_2 · N(S) + w_3 · N(G)

In the formula, B, S and G are respectively the average blink frequency, the saccade statistic and the fixation-stability statistic over a time window; N(·) is a normalization function; w_1, w_2 and w_3 are the preset weight coefficients; and F is the visual fatigue index.
In summary, the present invention includes at least one of the following beneficial technical effects:
1. The invention achieves an ultra-low-latency image-following effect by predictively anticipating the user's eye-movement state, in contrast to the passive measurement of the prior art, in which image adjustment always lags behind the user's real viewpoint.
2. By accurately calibrating and recording each user's unique interpupillary distance, the invention gives the final image adjustment a high degree of physiological suitability. Compared with existing fixed-parallax designs and tedious, inaccurate manual adjustment, this scheme provides the user with a genuinely individualized visual experience and greatly relieves the vergence-accommodation conflict and discomfort produced by prolonged viewing.
3. By multiplexing the eye-tracking data stream, the invention achieves a quantitative evaluation of the user's visual fatigue, in contrast to conventional image adjustment devices, which serve only as display tools and entirely neglect the user's health state.
Drawings
FIG. 1 is a diagram of a device module architecture of the present invention;
FIG. 2 is a schematic diagram of an image acquisition module according to the present invention;
FIG. 3 is a schematic diagram of a user physiological characteristic initialization module according to the present invention;
FIG. 4 is a schematic diagram of a process module according to the present invention;
FIG. 5 is a block diagram of an image adjustment module according to the present invention;
FIG. 6 is a schematic diagram of a display module according to the present invention;
fig. 7 is a schematic diagram of a visual health monitoring module according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to fig. 1 to 7.
Referring to fig. 1, the present invention provides an automatic image adjusting device based on human eye parallax, comprising:
the image acquisition module is used for acquiring images of the eyes of a user in real time;
the user physiological characteristic initialization module is connected with the image acquisition module and is used for measuring and storing physiological parameters of a user;
The processing module is connected with the image acquisition module and the user physiological characteristic initialization module and is used for determining and predicting the target three-dimensional space positions of the two eyes of the user according to the images of the two eyes of the user and the physiological parameters of the stored user;
the image adjustment module is connected with the processing module and is used for generating an adaptive image adaptive to the eyes of the user according to the target three-dimensional space position of the eyes of the user;
The display module is connected with the image adjustment module and used for displaying the adaptive image;
and the visual health monitoring module is connected with the processing module and is used for evaluating the visual fatigue of the user based on time sequence data of the target three-dimensional space position.
Specifically, when the device is first started or is actively triggered by the user, the user physiological characteristic initialization module begins to work. The module controls the display module to present high-contrast calibration points at several preset positions on the screen, sequentially or simultaneously. At the same time, the device guides the user, through on-screen information or voice prompts, to steadily fixate on these calibration points one by one.
At the moment that the user looks at a specific calibration point, the user physiological characteristic initialization module instructs the image acquisition module to capture one or more stereoscopic image pairs containing the eyes of the user in a high-precision synchronous mode. After the image is acquired, the module analyzes the image through an internal algorithm, accurately locates the center pixels of the pupils of the left eye and the right eye, and reconstructs the accurate coordinates of the centers of the two pupils in a three-dimensional space by utilizing the triangulation principle of binocular stereoscopic vision. By calculating the Euclidean distance between the two three-dimensional coordinate points, the device can obtain the inter-pupillary distance (IPD) data unique to the user. The data is permanently stored as a core physiological parameter and transmitted to a processing module to provide an accurate personalized benchmark for subsequent adjustment of all images, thereby ensuring comfort and accuracy of adjustment.
After the calibration is completed, the device enters a real-time closed-loop adjustment work flow of the core. In this process, the image acquisition module continuously captures real-time stereoscopic image pairs of both eyes of the user at a high frequency and transmits this dynamic, raw image data stream to the processing module without delay. The processing module immediately executes the core operation task after receiving each frame of image data. Firstly, the actual three-dimensional space position of the eyes of a user at the current physical moment is accurately calculated by a stereo matching algorithm and combining personal interpupillary distance parameters stored in a calibration stage.
However, the device does not use this current position directly. To overcome the inherent device delay from image acquisition to final display, the processing module uses its internal time-series prediction algorithm, such as a Kalman filter model. The model jointly considers the historical position data and the current measurement: it not only smooths and denoises the position signal but also accurately predicts the 'target three-dimensional spatial position' that the user's eyes are about to reach in the next display frame. This predicted position is the key to achieving ultra-low latency and eliminating picture smear and jitter.
The predicted target three-dimensional spatial position is immediately sent to the image adjustment module as a precise instruction. According to this instruction, the image adjustment module dynamically places the left and right virtual cameras in the three-dimensional rendering environment at the predicted target positions. It then constructs an asymmetric viewing frustum for each of the two virtual cameras, ensuring that a perspective-correct image is obtained even when the user's actual viewpoint is off-center relative to the screen. The module then renders the scene with this setup, generating a pair of adapted images that match the user's future viewpoint. The pair of adapted images is finally transferred to the display module for accurate presentation to the user, completing a seamless adjustment cycle.
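The asymmetric-frustum construction described above can be sketched as follows. This is a minimal illustrative example, not taken from the patent: it assumes the screen lies in the z = 0 plane with the tracked eye at z > 0, and uses glFrustum-style matrix conventions; all function and parameter names are hypothetical.

```python
import numpy as np

def off_axis_frustum(eye, screen_w, screen_h, near=0.1, far=100.0):
    """Asymmetric viewing frustum for a tracked eye position.

    `eye` = (ex, ey, ez): eye position in metres relative to the screen
    centre, with the screen in the z = 0 plane and the viewer at ez > 0.
    Returns a 4x4 OpenGL-style projection matrix (glFrustum conventions).
    """
    ex, ey, ez = eye
    # Project the physical screen edges onto the near plane as seen from the eye:
    # an off-centre eye yields l != -r and b != -t, i.e. an asymmetric frustum.
    l = (-screen_w / 2 - ex) * near / ez
    r = ( screen_w / 2 - ex) * near / ez
    b = (-screen_h / 2 - ey) * near / ez
    t = ( screen_h / 2 - ey) * near / ez
    return np.array([
        [2 * near / (r - l), 0.0,                (r + l) / (r - l),           0.0],
        [0.0,                2 * near / (t - b), (t + b) / (t - b),           0.0],
        [0.0,                0.0,               -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,                0.0,               -1.0,                         0.0],
    ])
```

Per-eye matrices would then be obtained by offsetting the tracked head position by ±IPD/2 along the interocular axis before calling the function.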
Meanwhile, the visual health monitoring module works alongside the processing module as a parallel intelligent thread. It continuously receives the smoothed, high-precision eye-movement position time-series data output by the processing module, performs background analysis on the data, extracts key biological indicators such as blink frequency, fixation stability and saccade behavior, and dynamically evaluates the user's visual fatigue according to a preset health model. When the evaluation result exceeds the safety threshold, the module actively triggers the device's prompt function to suggest that the user rest, realizing intelligent care for the user's health without affecting the core visual experience.
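The weighted fatigue evaluation can be sketched as below. The specific weights, the min-max normalization, and the feature ranges are illustrative assumptions; the patent specifies only that preset weight coefficients, a normalization function and a preset warning threshold are used.

```python
def fatigue_index(blink_rate, saccade_stat, fixation_stability,
                  weights=(0.4, 0.3, 0.3), bounds=((0, 60), (0, 1), (0, 1))):
    """Weighted visual-fatigue index F = w1*N(B) + w2*N(S) + w3*N(G).

    `bounds` gives the (min, max) range used by the min-max normalisation
    N for each feature; values outside a range are clamped to [0, 1].
    Weights and bounds here are placeholders, not values from the patent.
    """
    feats = (blink_rate, saccade_stat, fixation_stability)
    norm = [min(max((v - lo) / (hi - lo), 0.0), 1.0)
            for v, (lo, hi) in zip(feats, bounds)]
    return sum(w * n for w, n in zip(weights, norm))

def check_alarm(index, threshold=0.7):
    """Trigger an early warning when the index exceeds a preset threshold."""
    return index > threshold
```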
Referring to fig. 2, the image acquisition module includes:
The binocular infrared camera unit is used for synchronously collecting infrared stereoscopic images of the eyes of a user;
And the synchronous control unit is used for controlling the synchronous acquisition frequency of the binocular infrared camera unit.
Specifically, the image acquisition module has the core function of capturing eye dynamics of a user accurately at high frequency and providing raw data input for subsequent processing and analysis. The physical structure of the module illustratively includes a binocular infrared camera unit and a synchronization control unit. The synchronous control unit is electrically connected to the binocular infrared camera unit.
A binocular infrared camera unit, preferably consisting of two physically separated infrared CMOS sensors. The two sensors are mounted at a predetermined, fixed distance, which is the baseline distance of the device. The infrared sensor can effectively reduce interference of ambient visible light on eye image capture.
And a synchronization control unit configured to simultaneously transmit one unified hardware trigger signal to the two infrared CMOS sensors. This ensures that both sensors are exposed and capture images at exactly the same point in time, thereby obtaining right and left stereo image pairs, denoted IL and IR, that are exactly synchronized in time.
To further improve image quality, infrared CMOS sensors preferably employ global shutter technology. The global shutter can enable all pixel points of the whole sensor to complete exposure at the same instant, effectively avoids image distortion caused by rapid movement of the head or eyeballs of a user, and ensures high geometric consistency of the acquired stereo image pair.
When the device works, the synchronous control unit of the image acquisition module continuously transmits a trigger signal at a frequency not lower than 120 Hz. After receiving the signal, the binocular infrared camera unit captures an infrared stereo image pair and directly transmits the infrared stereo image pair as a continuous and real-time data stream to a subsequent connected processing module. The direct data transmission path can significantly reduce delay caused by data transfer in a device bus or a memory.
In order for the processing module to be able to correctly interpret the stereo image pair output by the image acquisition module and reconstruct three-dimensional spatial information therefrom, the module is associated with a set of pre-calibrated parameters. This set of parameters includes, but is not limited to, the baseline distance of the binocular infrared camera unit and the internal parameters of each individual camera.
The internal parameters include the focal length of the camera and principal point coordinates. Through the parameters, the processing module can convert the pixel points and the parallax values thereof in the two-dimensional image into physical coordinates in the three-dimensional space.
At this time, the processing module applies the triangulation principle to convert the disparity value d of the pixel at image coordinates (u, v), computed from the stereo image pair IL and IR, into the depth coordinate of that point in the camera coordinate system through the following relation:

Z = f · B / d

In the formula, Z represents the depth coordinate of the pixel point in three-dimensional space; f represents the equivalent focal length of the camera; B represents the baseline distance of the binocular infrared camera unit; and d represents the horizontal pixel offset, i.e., the disparity, of the pixel point at image coordinates (u, v) between the left and right images.
After the depth coordinate Z is obtained, the other two spatial coordinates X and Y of the point can be further calculated to complete the three-dimensional coordinate reconstruction through the following relations:

X = (u − c_x) · Z / f_x

Y = (v − c_y) · Z / f_y

In the formula, (u, v) are the two-dimensional image pixel coordinates; f_x and f_y are the focal lengths in pixels, which are internal parameters of the camera; and (c_x, c_y) are the principal point coordinates.
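The two relations above combine into a small reconstruction routine. This sketch assumes a rectified camera pair with equal focal lengths f_x = f_y = f; the function name and example numbers are illustrative.

```python
def reconstruct_point(u, v, disparity, f, baseline, cx, cy):
    """Convert a pixel (u, v) with a given disparity into a camera-frame
    3D point via Z = f*B/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f.
    Assumes a rectified stereo pair with fx = fy = f (in pixels)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    Z = f * baseline / disparity          # depth from disparity
    X = (u - cx) * Z / f                  # back-project horizontal offset
    Y = (v - cy) * Z / f                  # back-project vertical offset
    return X, Y, Z
```

For example, with f = 800 px, B = 0.06 m and a disparity of 48 px, the point lies at depth Z = 800 · 0.06 / 48 = 1 m.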
Referring to fig. 3, the user physiological characteristic initialization module includes:
the calibration guiding unit is used for presenting a calibration pattern for guiding the user to look at;
The pupil distance calculating unit is used for calculating and storing pupil distance of the user as a physiological parameter according to the image acquired when the user looks at the calibration pattern;
The interpupillary distance calculation unit calculates and stores the interpupillary distance of the user by calculating the euclidean distance between the three-dimensional space coordinates of the pupil center of the left eye and the three-dimensional space coordinates of the pupil center of the right eye of the user, and the relation is:
IPD = √[(x_L − x_R)² + (y_L − y_R)² + (z_L − z_R)²]

In the formula, IPD is the interpupillary distance; x_L, y_L and z_L are respectively the X-, Y- and Z-axis components of the left-eye three-dimensional coordinate vector; and x_R, y_R and z_R are respectively the X-, Y- and Z-axis components of the right-eye three-dimensional coordinate vector.
In particular, the implementation of the user physiological characteristic initialization module begins with a calibration boot phase. First, the calibration guiding unit is activated, which presents one or more preset calibration patterns to the user via the display module. For example, the calibration pattern may be a bright spot, cross, or other high contrast pattern that appears sequentially or simultaneously at different locations on the screen.
The device can guide the user to stably watch the sight on the calibration pattern through voice or text prompt. During the fixation period of the user, the image acquisition module synchronously acquires at least one group of infrared stereoscopic images containing the whole areas of the eyes of the user.
The acquired infrared stereoscopic image pair is then transferred to a processing module. The three-dimensional coordinate reconstruction unit in the processing module processes the image pair, and the positions of the left eye pupil center and the right eye pupil center of the user in a preset three-dimensional space coordinate system are accurately calculated through stereo matching and a triangulation algorithm.
After obtaining the three-dimensional coordinates of the pupil centers of both eyes, the interpupillary distance calculation unit performs a calculation operation. Specifically, the euclidean distance between the three-dimensional space coordinates of the center of the pupil of the left eye of the user and the three-dimensional space coordinates of the center of the pupil of the right eye is calculated to finally determine the interpupillary distance of the user. The calculated relationship is defined by the following formula:
IPD = √[(x_L − x_R)² + (y_L − y_R)² + (z_L − z_R)²]

In the formula, IPD is the interpupillary distance; x_L, y_L and z_L are respectively the X-, Y- and Z-axis components of the left-eye three-dimensional coordinate vector; and x_R, y_R and z_R are respectively the X-, Y- and Z-axis components of the right-eye three-dimensional coordinate vector.
In order to improve the robustness of the measurement result, the device may repeat the above calibration and calculation process several times and average the interpupillary distance values obtained, so as to reduce the potential error of any single measurement.
Finally, the module stores the determined final interpupillary distance value IPD in a non-volatile memory unit of the device, associated with the current user account or device identifier. This stored parameter serves as a key personal reference in all subsequent real-time parallax adjustment operations.
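The interpupillary-distance computation and multi-measurement averaging described above amount to the following sketch (the function names are illustrative):

```python
import math

def interpupillary_distance(left_pupil, right_pupil):
    """Euclidean distance between the 3D coordinates of the two pupil centres."""
    return math.dist(left_pupil, right_pupil)

def calibrate_ipd(samples):
    """Average the IPD over repeated calibration measurements to reduce
    the error of any single measurement; `samples` is a sequence of
    (left_xyz, right_xyz) coordinate pairs."""
    return sum(interpupillary_distance(l, r) for l, r in samples) / len(samples)
```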
Referring to fig. 4, the processing module includes:
The three-dimensional coordinate reconstruction unit is used for calculating the current three-dimensional space positions of the two eyes of the user according to the infrared stereoscopic images of the two eyes of the user;
the time sequence state prediction unit is used for predicting the three-dimensional space position of the target according to the time sequence data of the current three-dimensional space position;
the time sequence state prediction unit predicts the target three-dimensional spatial position using a Kalman filter model, correcting the a priori state estimate with the current measurement through the following state-update relation to generate the a posteriori optimal state estimate:

x̂_k = x̂_k^- + K_k · (z_k − H · x̂_k^-)

In the formula, x̂_k represents the optimal estimate of the state at time k, incorporating the measurement at that time; x̂_k^- represents the state at time k predicted from the motion model using only the estimate at the previous time k−1, before the measurement is obtained; K_k is the Kalman gain; z_k represents the three-dimensional spatial position of the user's eyes actually measured by the sensor at time k; and H is the observation matrix.
In particular, the hardware carrier of the processing module may be an FPGA or ASIC chip to meet the requirements of the device for high-speed parallel computation and low processing delay. The processing module may include a three-dimensional coordinate reconstruction unit and a time-series state prediction unit.
The three-dimensional coordinate reconstruction unit is used for receiving the synchronous stereo image pair from the image acquisition module and calculating the original three-dimensional space position of the eyes of the user at the current moment according to the synchronous stereo image pair.
The unit first performs a stereo matching algorithm on the stereo image pair to generate a dense disparity map representing the horizontal displacement of corresponding pixels in the left and right images.
To balance computational efficiency with reconstruction accuracy, the stereo matching algorithm may preferably employ a semi-global block matching algorithm.
After the parallax map is obtained, the three-dimensional coordinate reconstruction unit further converts the pupil center pixel position in the parallax map and the corresponding parallax value thereof into three-dimensional space coordinates under a camera coordinate system through a triangulation principle based on the internal and external parameters of the calibrated binocular camera module.
The current three-dimensional space position calculated by the unit is outputted to the time sequence state prediction unit as an instantaneous measurement value containing sensor noise.
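As a concrete (if deliberately naive) illustration of disparity computation, the following sketch performs exhaustive sum-of-absolute-differences block matching along rectified scanlines. It is a stand-in for the semi-global block matching named above, which additionally aggregates smoothness costs along multiple paths; all names are illustrative.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, block=5):
    """Naive SAD block matching on a rectified grayscale stereo pair.

    For each pixel in the left image, searches `max_disp` horizontal
    offsets in the right image and keeps the lowest-cost match.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))  # disparity with minimum SAD cost
    return disp
```

A production implementation would instead use an aggregated-cost method such as semi-global matching for robustness in weakly textured regions, as the text notes.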
The time sequence state prediction unit receives the time sequence data stream of the current three-dimensional space position from the three-dimensional coordinate reconstruction unit, and outputs a smooth and prospective target three-dimensional space position through a time sequence filtering and prediction algorithm. This process can effectively compensate for sensor delays and data processing delays inherent to the device.
In a specific embodiment, the time sequence state prediction unit establishes a dynamic model for describing the eyeball motion state, and adopts a Kalman filtering algorithm to perform optimal state estimation on the model. First, a six-dimensional state vector is created that contains the three-dimensional position component and the three-dimensional velocity component of the eye.
The state vector is then continuously estimated by two recursive steps of the kalman filter algorithm, namely the prediction step and the update step.
In the prediction step, the unit calculates the a priori state estimate x̂_k^- at the current time k from the optimal state estimate x̂_{k−1} at the previous time k−1 and the state transition matrix F, with the relation:

x̂_k^- = F · x̂_{k−1}

In the formula, F represents the state transition matrix; x̂_k^- represents the state at time k predicted from the motion model before the measurement is obtained; and x̂_{k−1} is the optimal state estimate at the previous time.
At the same time, the unit propagates the prior error covariance \(P_k^-\), with the relation:

\[ P_k^- = F\,P_{k-1}\,F^{\mathsf{T}} + Q \]

where \(P_{k-1}\) is the posterior error covariance matrix at the previous moment, \(F\) is the state transition matrix, \(Q\) is the process noise covariance matrix, and \(P_k^-\) represents the uncertainty of the prediction model.
In the updating step, the unit receives the current position measurement \(z_k\) from the three-dimensional coordinate reconstruction unit and first calculates the Kalman gain \(K_k\), with the relation:

\[ K_k = P_k^- H^{\mathsf{T}} \left( H\,P_k^- H^{\mathsf{T}} + R \right)^{-1} \]

where \(H\) is the observation matrix mapping the state vector to the measurement space, \(R\) is the measurement noise covariance matrix, and \(K_k\) is the Kalman gain.
Subsequently, the unit corrects the prior state estimate with the measurement \(z_k\) to obtain the optimal posterior state estimate \(\hat{x}_k\) at the current moment, with the relation:

\[ \hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H\,\hat{x}_k^- \right) \]

where \(\hat{x}_k\) represents the optimal estimate of the device state at moment \(k\) that incorporates the measurement at that moment; \(\hat{x}_k^-\) represents the state predicted purely from the motion model of the previous moment, before the measurement is obtained; \(K_k\) is the Kalman gain; \(z_k\) represents the three-dimensional spatial position of the user's eyes actually measured by the sensor at moment \(k\); and \(H\) is the observation matrix.
Finally, the unit updates the posterior error covariance \(P_k\) for the next iteration, with the relation:

\[ P_k = \left( I - K_k H \right) P_k^- \]

where \(I\) is the identity matrix and \(P_k\) is the updated posterior error covariance.
Finally, the time sequence state prediction unit outputs the three-dimensional position components contained in the optimal posterior state estimate \(\hat{x}_k\), as the target three-dimensional space position of the user's eyes at the next moment, to the image adjustment module and the visual health monitoring module.
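The prediction-update recursion described above can be sketched as a constant-velocity Kalman filter over the six-dimensional state. This is a minimal illustrative sketch: the sampling interval, the noise covariances Q and R, and the synthetic measurement stream are invented for demonstration and are not values from the patent.

```python
import numpy as np

dt = 1 / 60.0  # assumed sampling interval (s)

# Six-dimensional state [x, y, z, vx, vy, vz]; constant-velocity motion model.
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
Q = 1e-4 * np.eye(6)                          # process noise (tuning parameter)
R = 1e-6 * np.eye(3)                          # measurement noise (tuning parameter)

x_hat = np.zeros(6)  # initial optimal state estimate
P = np.eye(6)        # initial error covariance

def kalman_step(x_hat, P, z):
    # Prediction step: x^- = F x,  P^- = F P F^T + Q
    x_pred = F @ x_hat
    P_pred = F @ P @ F.T + Q
    # Update step: K = P^- H^T (H P^- H^T + R)^-1
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)   # posterior state estimate
    P_new = (np.eye(6) - K @ H) @ P_pred    # posterior error covariance
    return x_new, P_new

# Feed synthetic, noise-free position measurements of an eye drifting along +x.
for k in range(1, 50):
    z = np.array([0.3 * k * dt, 0.0, 0.6])
    x_hat, P = kalman_step(x_hat, P, z)

position, velocity = x_hat[:3], x_hat[3:]
```

With small measurement noise the posterior position tracks the measurements closely, while the velocity components build up the smoothing and look-ahead behavior the patent attributes to this unit.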
Referring to fig. 5, the image adjustment module includes:
A virtual camera setting unit configured to set a target three-dimensional space position as a position of a virtual camera in a three-dimensional rendering scene;
the view cone construction unit is used for constructing an asymmetric observation view cone according to the position of the virtual camera and the physical boundary of the display module;
and the image rendering unit is used for generating an adaptive image according to the asymmetric observation view cone.
Specifically, in its physical implementation, the functions of the image adjustment module are carried by a graphics processing unit (GPU) and realized by running a specific set of software instructions. For example, the internal functional logic of the image adjustment module may be divided into several cooperating units.
The image adjustment module comprises a virtual camera setting unit, a view cone constructing unit and an image rendering unit. These units logically work sequentially, converting the abstract three-dimensional spatial position data into a final two-dimensional pixel image.
First, the virtual camera setting unit receives vector data including target three-dimensional space positions of left and right eyes of a user, which is output from the processing module. The unit sets the two three-dimensional spatial positions directly as the exact coordinates of the left and right virtual cameras in the three-dimensional rendered scene for generating the stereoscopic image pair.
Then, the view cone constructing unit constructs asymmetric viewing cones for the left and right virtual cameras, respectively, based on the positions of the virtual cameras. This is a key step to correct perspective distortion caused by the user's viewpoint being offset from the screen center normal.
Specifically, the first step is to establish a three-dimensional coordinate system with the lower left corner of the display module as the origin. The physical plane of the display module lies in the X-Y plane of this coordinate system.
The target three-dimensional position of one of the user's eyes, i.e. the position of the virtual camera, is expressed in this coordinate system as \((e_x, e_y, e_z)\). The near clipping plane of the display module is set as a rectangular area whose four corner points have coordinates \((x_{\min}, y_{\min}, 0)\), \((x_{\max}, y_{\min}, 0)\), \((x_{\min}, y_{\max}, 0)\) and \((x_{\max}, y_{\max}, 0)\).
The core task of the view cone construction unit is to calculate a projection matrix \(P\) that correctly projects points in the three-dimensional scene into the asymmetric view cone defined by the virtual camera position and the near clipping plane. The projection matrix \(P\) can be defined by the following formula:

\[
P = \begin{pmatrix}
\dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0 \\
0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0 \\
0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
\]

where \(n\) is the distance from the virtual camera to the near clipping plane; \(f\) is the distance from the virtual camera to the far clipping plane; and \(l, r, b, t\) are the left, right, lower and upper boundary coordinates of the near clipping plane rectangle.
The values of these boundary coordinates are dynamically calculated from the virtual camera position and the physical dimensions of the display module, ensuring that the lines of sight from the virtual camera position project correctly onto the four corners of the display plane. The calculation relations are:

\[
l = \frac{n\,(x_{\min} - e_x)}{e_z},\quad
r = \frac{n\,(x_{\max} - e_x)}{e_z},\quad
b = \frac{n\,(y_{\min} - e_y)}{e_z},\quad
t = \frac{n\,(y_{\max} - e_y)}{e_z}
\]

where \((e_x, e_y, e_z)\) are the three-dimensional coordinates of the virtual camera in the coordinate system; \(x_{\min}, x_{\max}\) are the minimum and maximum coordinates of the display module in the X-axis direction; \(y_{\min}, y_{\max}\) are the minimum and maximum coordinates of the display module in the Y-axis direction; and \(n\) is the preset distance from the virtual camera to the near clipping plane.
Through the above steps, the view cone construction unit can calculate, for each frame, a unique and accurate asymmetric projection matrix for each of the left and right eyes.
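The boundary-coordinate relations and the projection matrix above can be sketched in code. This follows the OpenGL glFrustum convention for an off-axis frustum; the function name and the example screen dimensions and eye position are illustrative assumptions, not values from the patent.

```python
import numpy as np

def asymmetric_projection(eye, x_min, x_max, y_min, y_max, n, f):
    """Build the off-axis projection matrix for a virtual camera at `eye`
    viewing a screen that lies in the z = 0 plane of the screen coordinate
    system (camera at distance e_z in front of it).

    eye  -- (e_x, e_y, e_z) virtual camera position, e_z > 0
    n, f -- near and far clipping distances
    """
    e_x, e_y, e_z = eye
    # Scale the screen edges onto the near plane (similar triangles).
    l = n * (x_min - e_x) / e_z
    r = n * (x_max - e_x) / e_z
    b = n * (y_min - e_y) / e_z
    t = n * (y_max - e_y) / e_z
    # Standard glFrustum-style matrix assembled from (l, r, b, t, n, f).
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0],
    ])

# Example: a 0.6 m x 0.34 m screen, left eye 0.5 m in front,
# displaced slightly left of the screen center.
P = asymmetric_projection((0.27, 0.17, 0.5), 0.0, 0.6, 0.0, 0.34, 0.1, 100.0)
```

When the eye sits exactly on the screen-center normal, the off-axis terms in the third column vanish and the matrix reduces to an ordinary symmetric perspective projection.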
Finally, the image rendering unit receives the projection matrix and applies it in the projective transformation stage of the three-dimensional rendering pipeline. Using the parallel computing power of the graphics processing unit, the image rendering unit multiplies all vertex data in the scene by the projection matrix to complete the transformation from view space to clip space.
After the subsequent standard graphics stages such as clipping, perspective division and viewport transformation, a frame of two-dimensional image is finally generated. The pixel content of this image has a perspective that fully matches the user's current real viewpoint position. The process is performed once for each of the left and right eyes, generating a complete pair of adapted images matched to the user's two eyes.
Referring to FIG. 6, the display module is an OLED display screen supporting a refresh rate of 120 Hz or higher.
Specifically, the display module adopts an organic light emitting diode, namely an OLED display screen. The OLED display screen has the advantages of high contrast, wide color gamut and high response speed due to the self-luminous characteristic. The method can ensure that dark part details and bright part details in the adaptive image can be clearly presented, and the sense of reality of stereoscopic vision is enhanced.
Moreover, in order to cooperate with the high frequency sensing and prediction of the front end of the device, the display module should support a display screen with a refresh rate of 120Hz or higher. The high refresh rate ensures that the adapted image generated by the image adjustment module for each instant of predicted position is quickly updated on the screen with low delay, thereby minimizing the delay from image generation to perception by the user at the physical presentation level.
Meanwhile, the adaptive image received by the display module is specially generated by the image adjusting module according to the target three-dimensional space position output by the processing module. The adapted image contains two images with accurate parallax rendered for the left and right eyes of the user, respectively. Each frame of adapted image is generated corresponding to a predicted user viewpoint position at the next moment.
In a specific working process, a display controller of the display module is synchronous with the rendering output of the image adjusting module. When a new adapted image of a frame is generated, the image is immediately transferred to the frame buffer of the display module. In the next display refresh period, the display module physically presents the content of the frame-adapted image.
And in order to realize stereoscopic vision, the display module may be used in an optical structure separating left and right eye images. In an application scenario of a virtual reality or augmented reality headset, the display module may include two independent display screens, each corresponding to a user's left and right eyes. In an application scenario of the naked eye 3D display device, the display module may integrate a lenticular lens array or a parallax barrier in front of the display screen, so as to accurately guide the pixel light in the adaptive image to the left and right eyes of the user respectively.
Referring to fig. 7, the visual health monitoring module includes:
an eye movement feature extraction unit, configured to extract eye movement features including blink frequency, saccade features and fixation stability from the time series data of the target three-dimensional space position;
The fatigue evaluation unit is used for calculating the visual fatigue index of the user according to the eye movement characteristics and a preset health model, and triggering early warning when the index exceeds a preset threshold value;
in the calculation of the visual fatigue index of the user by the fatigue evaluation unit, the visual fatigue index \(\mathrm{VFI}\) is calculated by the following weighted model:

\[ \mathrm{VFI} = w_1\,f(\mathrm{BF}) + w_2\,f(\mathrm{SF}) + w_3\,f(\mathrm{GS}) \]

where \(\mathrm{BF}\), \(\mathrm{SF}\) and \(\mathrm{GS}\) are respectively the average blink frequency, saccade feature and gaze stability statistics over a time window; \(f(\cdot)\) is a normalization function; \(w_1, w_2, w_3\) are preset weight coefficients; and \(\mathrm{VFI}\) is the visual fatigue index.
Specifically, the visual health monitoring module is in signal connection with the processing module. The connection enables the visual health monitoring module to continuously receive, in a non-blocking manner, the time-series data stream of the three-dimensional spatial positions of the user's binocular targets output by the processing module; this data stream is smoothed and predictive.
The vision health monitoring module multiplexes this high-precision eye movement data without adding additional hardware sensors or perception overhead to the health monitoring function. The module can realize dynamic evaluation and active early warning of visual fatigue of the user by carrying out depth analysis on the data stream, so that the intelligent care function of visual health of the user is increased while self-adaptive visual experience is provided, and the added value of the device is improved.
Specifically, the visual health monitoring module may physically or logically include an eye movement feature extraction unit and a fatigue evaluation unit.
And the eye movement characteristic extraction unit is connected with the output end of the processing module and is used for calculating and extracting a group of biological key indexes capable of representing the visual state of the user in real time from the time sequence data of the continuous target three-dimensional space position. The implementation process of the unit is as follows:
First, a rolling time series data analysis window is established, and the length of the window may be set to 60 seconds, for example. The eye movement feature extraction unit analyzes the data within the time window to extract eye movement features including at least one or more of:
First, blink frequency. Blink events are identified by analyzing the continuity of the time series data stream. A blink event is determined when there is a brief, preset pattern of loss or interruption of the eye position signal, or when a characteristic rapid closing and opening movement of the eyelid position is detected. The unit counts the blink events within the window to calculate the average blink frequency per unit time.
Second, saccade features. The velocity vector of the eyeball motion is obtained by taking the first-order derivative of the three-dimensional space position data. When the modulus of the velocity vector exceeds a preset threshold, it is determined that a saccadic movement has occurred. The unit further counts the occurrence frequency, average motion amplitude or average peak velocity of saccadic movements within the window, to quantify how actively the eyes switch between different gaze points.
Third, fixation stability. When the modulus of the velocity vector of the eyeball motion is below another preset threshold, the eyeball is judged to be in a gazing state. During this state, the eye movement feature extraction unit calculates the standard deviation of the sequence of three-dimensional spatial position points of the eyeball; this standard deviation characterizes the degree of eyeball micro-tremor while the user fixates a single target, i.e. the fixation stability.
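The saccade and fixation-stability computations described above can be sketched as follows. The speed thresholds, sampling rate and synthetic trajectory are illustrative choices, not values specified by the patent; blink detection, which relies on signal-loss patterns rather than position data, is omitted here.

```python
import numpy as np

def eye_movement_features(positions, dt, saccade_thresh=0.05, fixation_thresh=0.01):
    """Extract saccade and fixation-stability statistics from an (N, 3)
    array of eye positions sampled every `dt` seconds.

    saccade_thresh / fixation_thresh -- speed thresholds in m/s (illustrative).
    """
    positions = np.asarray(positions, dtype=float)
    velocity = np.diff(positions, axis=0) / dt   # first-order derivative
    speed = np.linalg.norm(velocity, axis=1)     # modulus of the velocity vector

    saccade_mask = speed > saccade_thresh
    saccade_rate = saccade_mask.mean()           # fraction of samples in saccade

    fixation_pts = positions[1:][speed < fixation_thresh]
    if len(fixation_pts) > 1:
        # Micro-tremor measure: mean per-axis std of positions while fixating.
        gaze_stability = float(np.mean(fixation_pts.std(axis=0)))
    else:
        gaze_stability = 0.0
    return saccade_rate, gaze_stability

# 1 s of synthetic data at 100 Hz: steady fixation, then one rapid 2 cm jump.
pos = np.tile([0.0, 0.0, 0.6], (100, 1))
pos[50:, 0] = 0.02
rate, stability = eye_movement_features(pos, dt=0.01)
```

A rolling 60-second window, as described above, would simply call this extraction repeatedly over the most recent samples.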
Subsequently, the fatigue evaluation unit receives the quantized eye movement feature value output by the eye movement feature extraction unit. The unit performs comprehensive weighted calculation on the characteristic values based on a preset visual health model to generate a single visual fatigue index capable of dynamically reflecting the current visual fatigue level of the user.
At this time, the fatigue evaluation unit calculates the visual fatigue index \(\mathrm{VFI}\) by a weighted linear model, with the specific calculation relation:

\[ \mathrm{VFI} = w_1\,f(\mathrm{BF}) + w_2\,f(\mathrm{SF}) + w_3\,f(\mathrm{GS}) \]

where \(\mathrm{VFI}\) represents the finally calculated visual fatigue index as a composite score; \(w_1, w_2, w_3\) represent preset non-negative weight coefficients, usually summing to 1, which characterize the relative importance of the different eye movement features in the comprehensive fatigue evaluation; \(f(\cdot)\) represents a normalization function that maps the raw eye movement feature values, which have different dimensions and value ranges, to a unified standardized interval such as \([0, 1]\) and reflects the nonlinear relation between each feature and the degree of fatigue; \(\mathrm{BF}\) represents the average blink frequency calculated within the time window; \(\mathrm{SF}\) represents the saccade feature statistic calculated within the time window, such as average saccade frequency or average saccade amplitude; and \(\mathrm{GS}\) represents the gaze stability statistic calculated within the time window, such as the mean position standard deviation.
Finally, the fatigue evaluation unit compares the visual fatigue index \(\mathrm{VFI}\) calculated in real time with a preset safety threshold. If the \(\mathrm{VFI}\) value exceeds the safety threshold, this indicates a high degree of visual fatigue for the user. At this point, the fatigue evaluation unit is configured to generate an early warning signal, which may drive the device to trigger a user prompt, for example presenting text or an icon on the display module or sounding an alert through an audio device, to suggest that the user rest, thereby achieving active health intervention.
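A minimal sketch of the weighted fatigue model and the threshold comparison above; the normalization ranges, weight values and safety threshold are invented for illustration and are not specified by the patent.

```python
def normalize(value, lo, hi):
    """Clamp-and-scale a raw feature value into the [0, 1] interval."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def visual_fatigue_index(blink_freq, saccade_stat, gaze_std,
                         weights=(0.4, 0.3, 0.3)):
    """VFI = w1*f(BF) + w2*f(SF) + w3*f(GS), with weights summing to 1."""
    # Illustrative normalization ranges for each raw feature.
    f_bf = normalize(blink_freq, 10.0, 30.0)    # blinks per minute
    f_sf = normalize(saccade_stat, 0.5, 3.0)    # saccades per second
    f_gs = normalize(gaze_std, 0.001, 0.01)     # fixation std (m)
    w1, w2, w3 = weights
    return w1 * f_bf + w2 * f_sf + w3 * f_gs

SAFETY_THRESHOLD = 0.7  # illustrative early-warning threshold

vfi = visual_fatigue_index(blink_freq=26.0, saccade_stat=2.5, gaze_std=0.008)
warn = vfi > SAFETY_THRESHOLD  # True here: all three features are elevated
```

In a deployment, the weights and normalization bounds would be calibrated against observed user data rather than fixed constants.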
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202511434130.0A CN120894817A (en) | 2025-10-09 | 2025-10-09 | An automatic image adjustment device based on human eye parallax |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120894817A true CN120894817A (en) | 2025-11-04 |