CN115624315B - Eye movement tracking method and device, electronic equipment, computer storage medium and product - Google Patents
- Publication number
- CN115624315B (application CN202211442435.2A)
- Authority
- CN
- China
- Prior art keywords
- eye image
- region
- target
- pupil
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
Abstract
The application discloses an eye movement tracking method and device, an electronic device, a computer storage medium and a computer program product, and relates to the technical field of computer vision. A specific implementation comprises the following steps: acquiring multiple frames of eye images of a target object, wherein the eye images contain light reflection information; in response to determining that the integrity of the pupil region in an eye image is lower than a preset integrity, performing completion processing on the pupil region in the eye image to obtain a first target eye image; determining a pupil center point of the first target eye image; determining a corneal reflection point according to the light reflection information of the first target eye image; and determining the eyeball movement of the target object according to the pupil center point and the corneal reflection point. The scheme helps to improve the precision and accuracy of eye movement tracking and can enhance the robustness of the pupil-cornea reflection tracking method.
Description
Technical Field
The present application relates to the field of image processing technology, and in particular to the fields of computer vision and smart medical technology; it specifically concerns an eye tracking method, an eye tracking device, an electronic device, a non-volatile computer-readable storage medium, and a computer program product.
Background
Eye tracking is a technique for measuring eye movement. The direction of gaze, the most important aspect of eye movement, is controlled by the brain and reflects the brain's behavioral activity. Eye tracking has therefore become a technical means of studying visual behavior and human behavior in many fields, such as neurocognitive science, psychology, marketing, and user research.
Eye movement tracking technology has developed over several decades, producing schemes such as mechanical recording, electro-optical recording, and electromagnetic induction. Current non-invasive, non-contact eye tracking methods, however, build on advances in camera, infrared, and computing technology, and the most common among them is the pupil-cornea reflection tracking method. Its basic process is as follows: a near-infrared light source emits infrared light, the light is reflected at the cornea of the eye, and a camera then captures eye images containing that reflection. As the eye moves, the position of the pupil center changes, while the absolute position of the corneal reflection does not rotate with the eyeball; the movement of the whole eyeball can therefore be represented by the change of the pupil-center position relative to the fixed corneal reflection.
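The relative-position principle of pupil-cornea reflection tracking described above can be sketched as follows. This is an illustrative sketch only; the function name and coordinate values are assumptions, not part of the disclosure.

```python
def gaze_vector(pupil_center, corneal_glint):
    # The corneal reflection (glint) stays essentially fixed while the
    # eyeball rotates, so the pupil-center position relative to the glint
    # encodes the gaze direction.
    px, py = pupil_center
    gx, gy = corneal_glint
    return (px - gx, py - gy)

# Tracking across frames: the change of this vector over time
# represents the eye-movement process.
frames = [((120, 80), (100, 75)), ((125, 82), (100, 75))]
vectors = [gaze_vector(p, g) for p, g in frames]
```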
Although the existing pupil-cornea reflection tracking method is convenient and efficient, it depends strongly on locating the pupil in order to recover the movement of the eyeball. When the pupil image is incomplete (for example, when the pupil is partially occluded by squinting, a spectacle frame, eyelashes, or eye makeup), the existing method cannot capture the position of the pupil center, which degrades the accuracy and precision of eye movement tracking.
Disclosure of Invention
To address the low precision and accuracy of the pupil-cornea reflection tracking method in the prior art, the present invention provides an eye tracking method, an eye tracking device, an electronic device, a non-volatile computer-readable storage medium, and a computer program product.
According to a first aspect, there is provided an eye tracking method comprising:
acquiring a multi-frame eye image of a target object, wherein the eye image comprises light reflection information;
in response to the judgment that the integrity of the pupil area in the eye image is lower than the preset integrity, performing completion processing on the pupil area in the eye image to obtain a first target eye image;
determining a pupil center point of the first target eye image;
determining a corneal reflection point according to the light reflection information of the first target eye image;
and determining the eyeball motion condition of the target object according to the pupil center point and the cornea reflection point.
According to a second aspect, there is provided an eye tracking device comprising:
an image acquisition unit configured to acquire multiple frames of eye images of a target object, the eye images containing light reflection information;
an image completion unit configured to perform completion processing on the pupil region in an eye image, in response to determining that the integrity of the pupil region in the eye image is lower than a preset integrity, to obtain a first target eye image;
a pupil center determining unit configured to determine a pupil center point of the first target eye image;
a cornea reflection point determining unit for determining a cornea reflection point according to the light reflection information of the first target eye image;
and the eye movement tracking unit is used for determining the eyeball movement condition of the target object according to the pupil center point and the cornea reflection point.
According to a third aspect, there is provided an electronic device comprising: one or more processors; a storage device to store one or more programs that, when executed by one or more processors, cause the one or more processors to implement a method such as any of the embodiments of the eye tracking method.
According to a fourth aspect, a non-transitory computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements a method as in any one of the embodiments of the eye tracking method.
According to a fifth aspect, a computer program product is provided, comprising a computer program which, when executed by a processor, performs a method as in any of the embodiments of the eye tracking method.
According to the scheme of the application, the integrity of the pupil region in each acquired frame of the eye image is evaluated. When the integrity is lower than the preset integrity (that is, the pupil region is incomplete because the subject is squinting or the pupil is partially occluded by an object such as a spectacle frame, eyelashes, or eye makeup), completion processing is applied to the pupil region to obtain a first target eye image whose pupil region reaches the preset integrity (i.e., is complete). The pupil center point and the corneal reflection point are then determined from the first target eye image, and finally the eyeball movement of the target object is determined from these two points, achieving eye movement tracking. By judging the integrity of the pupil region and completing regions whose integrity falls below the preset integrity, the pupil center point can always be extracted from a sufficiently complete pupil region. This avoids the problem that a large amount of information lost to an incomplete pupil region makes the pupil center impossible to capture, and in turn the problem that the eye movement cannot be judged; it thereby improves the precision and accuracy of pupil-center-based eye movement tracking and enhances its robustness.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an eye tracking method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of an eye tracking method according to the present application;
FIG. 4 is a schematic diagram of an embodiment of an eye tracking device according to the present application;
fig. 5 is a block diagram of an electronic device for implementing an eye tracking method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding; these details are to be considered exemplary only. Those of ordinary skill in the art will therefore recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted below for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the eye tracking method or eye tracking apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as video applications, live applications, instant messaging tools, mailbox clients, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
Here, the terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server providing various services, such as a background server providing support for the terminal devices 101, 102, 103. The background server may perform processing such as determining an eye movement condition on the received data such as the eye image, and feed back a processing result (e.g., the eye movement condition) to the terminal device.
It should be noted that the eye tracking method provided in the embodiments of the present application may be executed by the server 105 or by the terminal devices 101, 102, and 103; accordingly, the eye tracking apparatus may be disposed in the server 105 or in the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of an eye tracking method according to the present application is shown. The eye tracking method comprises the following steps:
Step 201, acquiring multiple frames of eye images of a target object, wherein the eye images contain light reflection information.
Step 202, in response to determining that the integrity of the pupil region in an eye image is lower than a preset integrity, performing completion processing on the pupil region in the eye image to obtain a first target eye image.
Step 203, determining a pupil center point of the first target eye image.
Step 204, determining a corneal reflection point according to the light reflection information of the first target eye image.
Step 205, determining the eyeball movement of the target object according to the pupil center point and the corneal reflection point.
In this embodiment, an execution subject on which the eye tracking method runs (for example, the server or a terminal device shown in fig. 1) may evaluate the integrity of the pupil region and complete any pupil region whose integrity is lower than the preset integrity (that is, any incomplete pupil region), so that the pupil center point can be extracted from a pupil region whose integrity reaches the preset integrity. This avoids the problem that the pupil center position cannot be captured because an incomplete pupil region has lost a large amount of information, and in turn the problem that the eye movement cannot be judged; it thus helps to improve the precision and accuracy of pupil-center-based eye tracking (for example, the pupil-cornea reflection tracking method) and to enhance its robustness.
In some optional implementations of this embodiment, eye movement tracking may be performed based on the pupil centers of completed pupil regions alone, or based on both the pupil centers of pupil regions whose integrity reaches the preset integrity (i.e., complete pupil regions) and the pupil centers of completed pupil regions.
In some optional implementations of this embodiment, the inventors observe that the existing pupil-cornea reflection tracking method determines eyeball activity from the relative position of the corneal reflection point and the pupil, whereas in actual use the pupil is very likely to be partially occluded by objects such as eyelashes, spectacle frames, and eye cosmetics; in particular, subjects often squint in vision-based research experiments in neurocognitive science, psychology, marketing, and user research. All of these situations make the image of the pupil region incomplete, so that the existing pupil-cornea reflection tracking method cannot capture the position of the pupil center, which affects the accuracy and precision of eye movement tracking. The eye movement tracking method of the present application therefore judges whether the pupil region is complete and completes any incomplete pupil region, so that the whole eye movement process can be recorded stably and accurately. This overcomes the inability of pupil-based eye tracking methods to cope with squinting or with partial occlusion of the pupil by objects such as eyelashes, and improves the precision and accuracy of such methods. The steps of judging whether the pupil region is complete and completing an incomplete pupil region can be added to any pupil-based eye movement tracking method, for example to the existing pupil-cornea reflection tracking method.
In some optional implementations of this embodiment, the eye images may be acquired dynamically in real time during eye tracking by an image acquisition device such as a video camera or a camera. The acquisition method of the existing pupil-cornea reflection tracking method may be used: infrared light is emitted toward the eye of the target object at a certain inclination angle, and the pupil's reflection exhibits the "dark pupil" phenomenon, making the pupil appear darker than the iris; the eye images containing the infrared reflection information are then collected by an image acquisition device such as a camera.
In some optional implementations of this embodiment, the pupil region and the reflection point of the infrared light on the cornea may be extracted from the eye image by gray-scale-based image segmentation. For example, in each frame of the eye image, the region with the highest gray value (i.e., the brightest) can be identified as the reflection point of the infrared light on the cornea, and its position extracted; the region with the lowest gray value (i.e., the darkest) can be identified as the pupil region.
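The gray-scale segmentation just described can be sketched as follows. The threshold values and the function name are illustrative assumptions; the disclosure only specifies that the darkest region is the pupil and the brightest region is the corneal reflection.

```python
import numpy as np

def segment_eye_image(gray, pupil_thresh=30, glint_thresh=220):
    # Darkest pixels -> pupil region ("dark pupil" effect under oblique
    # infrared illumination); brightest pixels -> corneal reflection.
    pupil_mask = gray <= pupil_thresh
    glint_mask = gray >= glint_thresh
    # Position of the corneal reflection point: centroid of the
    # brightest pixels (None if no pixel exceeds the threshold).
    ys, xs = np.nonzero(glint_mask)
    glint_pos = (float(xs.mean()), float(ys.mean())) if xs.size else None
    return pupil_mask, glint_pos
```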
In some optional implementations of the embodiment, in the process of extracting the pupil region in each frame of eye image, the shape and size of the pupil region are not specifically limited in the present application, and may be determined according to specific requirements as long as the pupil region extracted in each frame of eye image is consistent in shape and size.
In some optional implementations of this embodiment, in order to accurately and effectively determine whether the integrity of the pupil region in each frame of the eye image reaches the preset integrity (i.e., whether the pupil region is intact), one can judge whether the shape of the pupil region is circular. For example, if the shape of the pupil region is determined to be circular, the integrity of the pupil region is judged to reach the preset integrity, indicating that the pupil is not partially occluded and the pupil region is complete; if the shape of the pupil region is determined not to be circular, the integrity of the pupil region is judged to be lower than the preset integrity, indicating that the pupil is partially occluded and the pupil region is incomplete.
In some optional implementations of the present embodiment, in order to further accurately determine whether the integrity of the pupil region in each frame of the eye image reaches the predetermined integrity, in the present embodiment, it is further proposed to determine the integrity of the pupil region based on the roundness, for example,
determining the roundness of a pupil region in each frame of the eye image;
judging whether the roundness of the pupil region reaches a preset roundness threshold; if so, the integrity of the pupil region is judged to reach the preset integrity, i.e., the pupil region is complete; if not, i.e., the roundness of the pupil region is lower than the preset roundness threshold, the integrity of the pupil region is judged to be lower than the preset integrity, i.e., the pupil region is incomplete.
In some optional implementations of this embodiment, the preset roundness threshold may be determined according to the actual situation. The roundness R of a perfect circle is 1, and since the pupil region may be circular or another shape, the preset roundness threshold may be set to 0.8 to balance the accuracy of the judgment against the diversity of pupil-region shapes. The roundness R of the pupil region in each frame of the eye image is then compared with 0.8: if R ≥ 0.8, the integrity of the pupil region reaches the preset integrity and the pupil region is complete, i.e., a well-acquired pupil image captured with the pupil unoccluded; if R < 0.8, the integrity of the pupil region is lower than the preset integrity and the pupil region is incomplete, i.e., a pupil image acquired while the pupil was partially occluded.
In some alternative implementations of the present embodiment, the circularity of the pupillary region in each frame of the eye image may be determined from the ratio between the area of the pupillary region and the square of the perimeter of the pupillary region.
In some optional implementations of this embodiment, the roundness of the pupil region in each frame of the eye image may be determined by the following formula:

R = 4πS / C²

wherein R is the roundness, S is the area of the pupil region, and C is the perimeter of the pupil region.
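The roundness test above can be sketched as follows. The function names are illustrative; the 0.8 threshold is the example value from the description.

```python
import math

def roundness(area, perimeter):
    # R = 4 * pi * S / C**2: equals 1 for a perfect circle and
    # decreases as the shape departs from circularity.
    return 4.0 * math.pi * area / perimeter ** 2

def pupil_is_complete(area, perimeter, threshold=0.8):
    # 0.8 is the example roundness threshold given in the description.
    return roundness(area, perimeter) >= threshold
```

For a half-disc (a pupil occluded along a chord through its center), R falls to roughly 0.75, below the 0.8 threshold, so the region is correctly flagged as incomplete.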
In some optional implementations of this embodiment, to ensure the accuracy of the completed pupil region and the feasibility of the completion processing, it is proposed to complete pupil regions whose integrity is lower than the preset integrity (i.e., incomplete regions) using eye images of the same target object from the same eye tracking session that contain a complete pupil region (i.e., one that is not missing or occluded), for example,
determining, among the multiple frames of eye images, at least one frame whose pupil region reaches the preset integrity, to obtain at least one frame of second target eye image;
and performing completion processing on the pupil area in the eye image based on the at least one frame of second target eye image to obtain the first target eye image.
This takes full account of the characteristics of eye movement tracking and of the prior art: no extra steps need to be added to the tracking process and no massive eye-image database needs to be built. Instead, high-quality, complete pupil-region images of the target object, captured in a normal eye-open state during the same eye tracking session, are used to fill incomplete pupil regions from that session. This keeps the scheme convenient and feasible, and because the filling fundamentally uses images of the same individual, it helps avoid the precision loss caused by inter-individual differences.
In some optional implementations of this embodiment, the pupil region in the eye image may be completed based on the at least one frame of second target eye image to obtain the first target eye image, meeting the requirement of capturing the pupil center position and thereby improving the accuracy and precision of pupil-center-based eye tracking (e.g., the pupil-cornea reflection tracking method). To further improve the accuracy of pupil-region completion, one or more frames of second target eye images whose pupil-region integrity reaches a standard integrity, set higher than the preset integrity, may be screened from the at least one frame of second target eye image; the pupil region in the eye image is then completed based on the screened frames to obtain the first target eye image.
In some optional implementations of this embodiment, to fill the pupil region accurately and ensure the accuracy and precision of the filled region, it is proposed to first determine the region to be completed within the incomplete pupil region and then fill it precisely: for example, determining the gray values of the eye image; determining the region to be completed of the pupil region in the eye image according to the gray values; and completing the region to be completed based on the at least one frame of second target eye image to obtain the first target eye image.
Specifically, within the incomplete pupil region of the eye image, any area whose gray value is greater than a preset gray value may be determined as the region to be completed.
Specifically, the preset gray value may be determined according to the actual situation. Since the pupil region is the area with the lowest gray value (i.e., the darkest) in the eye image, the gray value of the region to be completed will be greater (brighter) than that of a normal pupil region. The preset gray value can therefore be derived from the pupil region's gray value, with the aim of flagging, within the incomplete pupil region, any area whose gray value is markedly or abnormally greater than the lowest gray value in the eye image. For example, the preset gray value may be set a fixed amount above the lowest gray value in the eye image.
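The detection of the region to be completed can be sketched as follows. The `margin` offset and the function name are assumptions for illustration; the disclosure only specifies that the preset gray value is derived from the lowest gray value in the eye image.

```python
import numpy as np

def region_to_complete(gray, pupil_mask, margin=25):
    # Preset gray value: a fixed (assumed) margin above the darkest
    # value in the eye image.
    preset_gray = int(gray.min()) + margin
    # Pixels inside the (incomplete) pupil region that are abnormally
    # brighter than the darkest value are flagged for completion.
    return pupil_mask & (gray > preset_gray)
```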
In some optional implementations of this embodiment, to further ensure the accuracy and precision of the completed pupil region, it is proposed to select, from the at least one frame of second target eye image, the image whose content best matches the region to be completed, for example,
determining a first matching content of the eye image, determining a second matching content of the at least one frame of second target eye image;
determining a matching error according to the first matching content of the eye image and the second matching content of the at least one frame of second target eye image, and determining the second target eye image with the minimum matching error as a reference image;
and performing complementation processing on the region to be complemented according to the reference image to obtain the first target eye image.
In some optional implementations of the present embodiment, in order to further ensure a high matching degree between the second target eye image with the minimum matching error and the region to be compensated, in the present embodiment, a method for determining the matching content is proposed, for example,
determining a first reference region containing the region to be compensated in the eye image, and determining pixel points in the first reference region except the region to be compensated as first matching content of the eye image;
in a second target eye image, determining a region corresponding to the region to be compensated as a target region, determining a second reference region including the target region, and determining pixel points except the target region in the second reference region as second matching content of the second target eye image.
Specifically, the pixel points in the first matching content may be matched against the pixel points in the second matching content of each frame of second target eye image, yielding one matching error per frame; the second target eye image with the minimum matching error is then selected from among them as the reference image.
For example, the process of determining a reference image through matching content includes the steps of:
1. taking the band of 80 pixel points immediately outside the area to be complemented as the first matching content, taking the band of 80 pixel points outside the target area in the second target eye image as the second matching content, and matching the first matching content against the second matching content of each second target eye image in the database whose pupil region was completely acquired;
2. the matching process consists of placing the target area of each second target eye image in the database over the area to be compensated, and calculating, for each frame of second target eye image, the matching error between the first matching content and the second matching content;
3. calculating the matching error over the gray levels of the 80 pixel points using the Mean Squared Error (MSE);
4. taking the match with the minimum MSE as the matching result, i.e., taking the second target eye image corresponding to the minimum MSE as the reference image.
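The selection in the steps above could be sketched as follows, assuming the two matching contents have already been extracted as equal-length gray-value vectors (the function name and call signature are hypothetical):

```python
import numpy as np

def select_reference(first_content: np.ndarray,
                     second_contents: list) -> int:
    """Return the index of the second target eye image whose matching
    content has the smallest gray-level mean-squared error against the
    first matching content (e.g. the 80 boundary pixels in the example)."""
    errors = [float(np.mean((first_content.astype(float)
                             - c.astype(float)) ** 2))
              for c in second_contents]
    return int(np.argmin(errors))   # index of the reference image
```

The image at the returned index plays the role of the reference image whose target region is pasted over the region to be compensated.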
In some optional implementation manners of this embodiment, in order to ensure accuracy, reality, and effectiveness of the region to be supplemented after the supplementation processing, in this embodiment, the target region in the reference image is adopted to perform the supplementation processing on the region to be supplemented, so as to obtain the first target eye image. Because the determined target region of the reference image is most similar to the region to be compensated under a normal condition (i.e., a condition of no deletion or no occlusion), the accuracy and the real effectiveness of the region to be compensated after the compensation processing can be ensured to the greatest extent by performing the compensation processing on the region to be compensated by using the target region in the reference image.
In some optional implementation manners of this embodiment, in order to ensure accuracy and fineness of a completion effect of a region to be completed, in this embodiment, it is proposed to perform completion processing on the region to be completed according to the reference image, and then fuse a boundary region of the region to be completed to obtain the first target eye image.
Specifically, a Poisson fusion method may be adopted to fuse the boundary of the region to be compensated, thereby completing the region to be compensated.
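A teaching sketch of gradient-domain (Poisson) blending by Jacobi iteration — not the patent's production algorithm, and all names are assumptions; the mask must not touch the image border. Inside the mask, the discrete Poisson equation is solved with the pasted source's Laplacian as guidance and the surrounding eye image as the boundary condition, which is what makes the pasted boundary merge smoothly:

```python
import numpy as np

def poisson_blend(target: np.ndarray, source: np.ndarray,
                  mask: np.ndarray, iters: int = 200) -> np.ndarray:
    """Blend `source` into `target` inside `mask` in the gradient domain.
    `mask` must not include border pixels of the image."""
    out = target.astype(float).copy()
    src = source.astype(float)
    ys, xs = np.where(mask)
    for _ in range(iters):
        for y, x in zip(ys, xs):
            # Laplacian of the source acts as the guidance field
            lap = (4 * src[y, x] - src[y - 1, x] - src[y + 1, x]
                   - src[y, x - 1] - src[y, x + 1])
            # Jacobi update: average of neighbours plus guidance
            out[y, x] = (out[y - 1, x] + out[y + 1, x]
                         + out[y, x - 1] + out[y, x + 1] + lap) / 4.0
    return out
```

In practice a library implementation of seamless cloning would be used instead of this slow per-pixel loop; the sketch only illustrates the principle.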
In some optional implementations of the present embodiment, in order to further ensure the integrity of the pupil region after the completion processing, in the present embodiment, it is proposed to perform multiple completion processing on the region to be completed, for example,
determining whether the pupil area in the eye image subjected to completion processing reaches the preset integrity;
and if not, performing completion processing on the pupil area in the eye image again until the integrity of the pupil area after completion reaches the preset integrity, so as to obtain the first target eye image.
For example, for an incomplete pupil region, the roundness is calculated from the completed pupil region and compared with a preset roundness threshold of 0.8. If the roundness is greater than 0.8, the incomplete pupil region has been fully completed and no further completion is needed; if the roundness is less than 0.8, part of the pupil region remains incomplete, and completion continues until the roundness exceeds 0.8.
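The roundness test can be expressed with the standard circularity measure 4πA/P², which is 1.0 for a perfect circle and smaller for ragged or incomplete shapes (the use of this particular formula is an assumption — the excerpt does not define how roundness is computed):

```python
import numpy as np

def circularity(area: float, perimeter: float) -> float:
    """Circularity 4*pi*A / P**2: 1.0 for a perfect circle."""
    return 4.0 * np.pi * area / (perimeter ** 2)

def is_complete(area: float, perimeter: float,
                threshold: float = 0.8) -> bool:
    """Apply the 0.8 roundness threshold from the example above."""
    return circularity(area, perimeter) > threshold
```

The area and perimeter would typically come from a contour of the completed pupil mask.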
In some optional implementation manners of this embodiment, the process of completing the pupil region with integrity lower than the preset integrity can be dynamically implemented in real time in the eye tracking process, that is, for each frame of eye image acquired at the current time in the eye tracking process, if the integrity of the pupil region in the current frame of eye image is lower than the preset integrity, the pupil region in the current frame of eye image can be completed in real time by using the pupil region with integrity reaching the preset integrity in the multi-frame of eye image that has been determined before the current time.
In some optional implementation manners of this embodiment, the process of completing the pupil region whose integrity is lower than the preset integrity may also be implemented after the eye movement tracking process, that is, for each frame of eye image acquired in the eye movement tracking process, if the integrity of the pupil region in a certain frame of eye image is lower than the preset integrity, the pupil region in a certain frame of eye image may be completed by using the pupil region whose integrity reaches the preset integrity in a plurality of frames of eye images already determined in the whole eye movement tracking process.
In some optional implementation manners of this embodiment, after determining whether the integrity of the pupil region in each frame of eye image reaches the preset integrity, a corresponding mark may be applied to the eye image based on that integrity. For example, if the integrity of the pupil region in a frame reaches the preset integrity (i.e., the pupil region is complete), the frame is determined as a second target eye image and may be marked as "normal pupil region", or with another identifier (e.g., a number or character) indicating a complete pupil region. If the integrity of the pupil region in a frame is lower than the preset integrity (i.e., the pupil region is incomplete), the frame is marked as "pupil region to be compensated", or with another identifier indicating an incomplete pupil region. The identified eye images are stored in an image library, so that complete pupil regions can be read directly and accurately from the library during the matching stage of the completion process, improving the overall efficiency and accuracy of pupil completion.
In some optional implementation manners of this embodiment, for a complete pupil region itself or a complete pupil region after completion, a circular centroid algorithm may be used to determine the pupil center point.
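A centroid-based pupil-center sketch (names are illustrative; for a near-circular complete or completed pupil region, the mask centroid approximates the circle center, which is the idea behind the circular centroid algorithm mentioned above):

```python
import numpy as np

def pupil_center(pupil_mask: np.ndarray):
    """Return the (x, y) centroid of a boolean pupil mask."""
    ys, xs = np.nonzero(pupil_mask)
    return float(xs.mean()), float(ys.mean())
```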
In some optional implementations of this embodiment, the eyeball movement condition of the target object may be determined based on the relative position conditions of the pupil center point and the corneal reflection point in the multiple frames of the first target eye image, or the eyeball movement condition of the target object may be determined based on the relative position conditions of the pupil center point and the corneal reflection point in the multiple frames of the first target eye image and the second target eye image.
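A sketch of the pupil-center-minus-glint vector underlying this relative-position approach (names are assumptions; mapping these vectors to actual gaze points would require a calibration step, which this excerpt does not detail). Because the corneal reflection point is nearly fixed for small head movements, changes in the vector across frames track eyeball rotation:

```python
import numpy as np

def gaze_shift(pupil_centers, corneal_points) -> np.ndarray:
    """Per-frame vectors from the corneal reflection point to the
    pupil center across a sequence of first/second target eye images."""
    p = np.asarray(pupil_centers, dtype=float)
    c = np.asarray(corneal_points, dtype=float)
    return p - c
```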
In some optional implementation manners of this embodiment, the eye tracking method may be applied to application scenarios of neurocognitive science, psychology, marketing science, user research, eye movement research under a reading test, and the like based on visual information, so as to provide accurate and effective eye movement data for eye movement research.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the eye tracking method according to the present embodiment. In the application scenario of fig. 3, an execution subject 301 acquires a plurality of frames of eye images 302 of a target object, wherein the eye images include light reflection information. The execution subject 301 performs completion processing on a pupil area in the eye image in response to judging that the integrity of the pupil area in the eye image is lower than a preset integrity, so as to obtain a first target eye image 303. The execution subject 301 determines a pupil center point 304 of the first target eye image. The execution subject 301 determines a corneal reflection point 305 from the light reflection information of the first target eye image. The execution subject 301 determines an eye movement condition 306 of the target object according to the pupil center point 304 and the corneal reflection point 305.
With further reference to fig. 4, as an implementation of the method shown in the above figures, the present application provides an embodiment of an eye tracking apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which may include the same or corresponding features or effects as the embodiment of the method shown in fig. 2, in addition to the features described below. The device can be applied to various electronic equipment.
As shown in fig. 4, the eye tracking apparatus 400 of the present embodiment includes: an image acquisition unit 401, an image complementing unit 402, a pupil center determination unit 403, a corneal reflection point determination unit 404, and an eye movement tracking unit 405. The image acquiring unit 401 is configured to acquire a plurality of frames of eye images of a target object, wherein the eye images include light reflection information; an image complementing unit 402 configured to, in response to a determination that the integrity of the pupil region in the eye image is lower than a preset integrity, perform a complementing process on the pupil region in the eye image, resulting in a first target eye image; a pupil center determining unit 403 configured to determine a pupil center point of the first target eye image; a corneal reflection point determining unit 404 configured to determine corneal reflection points from light reflection information of the first target eye image; an eye tracking unit 405 configured to determine an eye movement of the target object according to the pupil center point and the corneal reflection point.
In this embodiment, specific processes of the image obtaining unit 401, the image complementing unit 402, the pupil center determining unit 403, the corneal reflection point determining unit 404, and the eye tracking unit 405 of the eye tracking apparatus 400 and technical effects thereof may refer to related descriptions of step 201, step 202, step 203, step 204, and step 205 in the corresponding embodiment of fig. 2, which are not repeated herein.
In some optional implementations of this embodiment, the image completion unit includes:
an image determining subunit configured to determine, in the plurality of frames of eye images, at least one frame of eye image in which the integrity of the pupil region reaches the preset integrity, resulting in at least one frame of second target eye image;
a complementing subunit configured to perform complementing processing on the pupil area in the eye image based on the at least one frame of second target eye image, resulting in the first target eye image.
In some optional implementations of this embodiment, a completion subunit configured to determine a grayscale value of the eye image; determining a region to be compensated of the pupil region in the eye image according to the gray value; and performing completion processing on the region to be completed based on the at least one frame of second target eye image to obtain the first target eye image.
In some optional implementations of this embodiment, the complementing subunit is configured to determine a first matching content of the eye image, determine a second matching content of the at least one frame of second target eye image; determining a matching error according to the first matching content of the eye image and the second matching content of the at least one frame of second target eye image, and determining the second target eye image with the minimum matching error as a reference image; and performing complementation processing on the region to be complemented according to the reference image to obtain the first target eye image.
In some optional implementations of the embodiment, the completion subunit is configured to determine, in the eye image, a first reference region including the region to be completed, and determine, as the first matching content of the eye image, pixel points in the first reference region except for the region to be completed; in a second target eye image, determining a region corresponding to the region to be compensated as a target region, determining a second reference region including the target region, and determining pixel points except the target region in the second reference region as second matching content of the second target eye image.
In some optional implementation manners of this embodiment, the complementing subunit is configured to perform complementing processing on the region to be complemented by using the target region in the reference image, so as to obtain the first target eye image.
In some optional implementations of the present embodiment, the completion subunit is configured to determine whether the integrity of the pupil region in the eye image after completion processing reaches the preset integrity; and if not, performing completion processing on the pupil area in the eye image again to obtain the first target eye image.
In some optional implementations of this embodiment, the image completion unit includes:
a determining subunit configured to determine a circularity of a pupil region in the eye image; and judging that the roundness of the pupil area is lower than a preset roundness threshold value, and determining that the integrity of the pupil area is lower than the preset integrity.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a non-volatile computer-readable storage medium, and a computer program product.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors 501, memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 501 is taken as an example.
The memory 502, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the eye tracking method in the embodiment of the present application (for example, the image acquisition unit 401, the image completion unit 402, the pupil center determination unit 403, the corneal reflection point determination unit 404, and the eye tracking unit 405 shown in fig. 4). The processor 501 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 502, that is, implements the eye tracking method in the above-described method embodiments.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device of the eye tracking method, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 502 may optionally include memory located remotely from the processor 501, which may be connected to the eye tracking method electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the eye tracking method may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 5 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the eye tracking method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 504 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an image acquisition unit, an image completion unit, a pupil center determination unit, a corneal reflection point determination unit, and an eye movement tracking unit. The names of these units do not in some cases constitute a limitation of the units themselves; for example, the eye movement tracking unit may also be described as a "unit for determining eye movement".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carrying one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a multi-frame eye image of a target object, wherein the eye image comprises light reflection information;
in response to the judgment that the integrity of the pupil area in the eye image is lower than the preset integrity, performing completion processing on the pupil area in the eye image to obtain a first target eye image;
determining a pupil center point of the first target eye image;
determining a corneal reflection point according to light reflection information of the first target eye image;
and determining the eyeball motion condition of the target object according to the pupil center point and the corneal reflection point.
As another aspect, the present application also provides a computer program product, which may be included in the apparatus described in the above embodiments; or may be separate and not assembled into the device. The computer program product carries one or more programs that, when executed by the apparatus, cause the apparatus to: acquiring a multi-frame eye image of a target object, wherein the eye image comprises light reflection information;
in response to the judgment that the integrity of the pupil area in the eye image is lower than the preset integrity, performing completion processing on the pupil area in the eye image to obtain a first target eye image;
determining a pupil center point of the first target eye image;
determining a corneal reflection point according to the light reflection information of the first target eye image;
and determining the eyeball motion condition of the target object according to the pupil center point and the corneal reflection point.
The foregoing description is only exemplary of the preferred embodiments of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (11)
1. An eye tracking method, the method comprising:
acquiring a multi-frame eye image of a target object, wherein the eye image comprises light reflection information;
in response to the judgment that the integrity of the pupil area in the eye image is lower than the preset integrity, performing completion processing on the pupil area in the eye image to obtain a first target eye image;
determining a pupil center point of the first target eye image;
determining a corneal reflection point according to light reflection information of the first target eye image;
determining the eyeball motion condition of the target object according to the pupil center point and the corneal reflection point;
performing completion processing on the pupil area in the eye image to obtain the first target eye image, including:
determining, in the multiple frames of eye images, at least one frame of eye image in which the integrity of the pupil area reaches the preset integrity, to obtain at least one frame of second target eye image;
and performing completion processing on the pupil area in the eye image based on the at least one frame of second target eye image to obtain the first target eye image.
2. The method of claim 1, wherein complementing the pupil region in the eye image based on the at least one frame of the second target eye image, resulting in the first target eye image, comprises:
determining a gray value of the eye image;
determining a region to be compensated of the pupil region in the eye image according to the gray value;
and performing completion processing on the region to be completed based on the at least one frame of second target eye image to obtain the first target eye image.
3. The method according to claim 2, wherein performing a completion process on the region to be completed based on the at least one frame of second target eye image to obtain the first target eye image comprises:
determining a first matching content of the eye image, determining a second matching content of the at least one frame of second target eye image;
determining a matching error according to the first matching content of the eye image and the second matching content of the at least one frame of second target eye image, and determining the second target eye image with the minimum matching error as a reference image;
and performing complementation processing on the region to be complemented according to the reference image to obtain the first target eye image.
4. The method of claim 3, wherein determining a first match content for the eye image, determining a second match content for the at least one frame of a second target eye image, comprises:
determining a first reference region containing the region to be compensated in the eye image, and determining pixel points in the first reference region except the region to be compensated as first matching content of the eye image;
in a second target eye image, determining a region corresponding to the region to be compensated as a target region, determining a second reference region including the target region, and determining pixel points except the target region in the second reference region as second matching content of the second target eye image.
5. The method according to claim 4, wherein the complementing the region to be complemented according to the reference image to obtain the first target eye image comprises:
and performing complement processing on the region to be complemented by adopting the target region in the reference image to obtain the first target eye image.
6. The method according to claim 3, wherein the complementing processing is performed on the region to be complemented according to the reference image, so as to obtain the first target eye image, and the method comprises:
and after the completion processing is carried out on the region to be completed according to the reference image, fusing the boundary region of the region to be completed to obtain the first target eye image.
7. The method of any one of claims 1 to 6, wherein performing a completion process on a pupil region in the eye image to obtain the first target eye image comprises:
determining whether the integrity of a pupil region in the eye image subjected to the completion processing reaches the preset integrity;
and if not, performing completion processing on the pupil area in the eye image again to obtain the first target eye image.
8. The method of any of claims 1 to 6, wherein determining that the integrity of the pupil region in the eye image is below the preset integrity comprises:
determining a circularity of a pupil region in the eye image;
and judging that the roundness of the pupil area is lower than a preset roundness threshold value, and determining that the integrity of the pupil area is lower than the preset integrity.
9. An eye movement tracking device, the device comprising:
an image acquisition unit, configured to acquire a plurality of frames of eye images of a target object, wherein the eye images comprise light reflection information;
an image completion unit, configured to perform completion processing on a pupil region in the eye image in response to determining that the integrity of the pupil region in the eye image is lower than a preset integrity, so as to obtain a first target eye image;
a pupil center determining unit, configured to determine a pupil center point of the first target eye image;
a corneal reflection point determining unit, configured to determine a corneal reflection point from the light reflection information of the first target eye image;
and an eye movement tracking unit, configured to determine an eyeball movement condition of the target object according to the pupil center point and the corneal reflection point;
wherein the image completion unit comprises:
an image determining subunit, configured to determine, from the plurality of frames of eye images, at least one frame of eye image in which the integrity of the pupil region reaches the preset integrity, so as to obtain at least one frame of second target eye image;
and a completion subunit, configured to perform completion processing on the pupil region in the eye image based on the at least one frame of second target eye image, so as to obtain the first target eye image.
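The device estimates eyeball movement from the pupil center point and the corneal reflection point, which matches the classic pupil-center/corneal-reflection (PCCR) scheme: under near-infrared illumination the corneal glint stays roughly fixed while the pupil moves, so their difference vector tracks eye rotation. A minimal sketch with a linear calibration map; the calibration form and all names are illustrative assumptions (real trackers typically fit a higher-order polynomial from a calibration session):

```python
def pccr_vector(pupil_center, glint):
    """Pupil-center minus corneal-reflection difference vector;
    this vector, not the raw pupil position, drives gaze estimation."""
    return (pupil_center[0] - glint[0], pupil_center[1] - glint[1])

def estimate_gaze(pupil_center, glint, calib):
    """Map the PCCR vector to gaze coordinates with a linear
    calibration: a 2x2 matrix (a, b, c, d) plus an offset (ox, oy)."""
    vx, vy = pccr_vector(pupil_center, glint)
    (a, b, c, d), (ox, oy) = calib
    return (a * vx + b * vy + ox, c * vx + d * vy + oy)
```

With an identity calibration, a pupil center at (12, 8) and a glint at (10, 5) yield a gaze estimate equal to the raw PCCR vector (2, 3).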
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 8.
11. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any one of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211442435.2A CN115624315B (en) | 2022-11-18 | 2022-11-18 | Eye movement tracking method and device, electronic equipment, computer storage medium and product |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115624315A (en) | 2023-01-20 |
| CN115624315B (en) | 2023-03-14 |
Family
ID=84910727
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211442435.2A Active CN115624315B (en) | 2022-11-18 | 2022-11-18 | Eye movement tracking method and device, electronic equipment, computer storage medium and product |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115624315B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116974370B (en) * | 2023-07-18 | 2024-04-16 | 深圳市本顿科技有限公司 | Anti-addiction child learning tablet computer control method and system |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| AUPQ896000A0 (en) * | 2000-07-24 | 2000-08-17 | Seeing Machines Pty Ltd | Facial image processing system |
| EP2499961B1 (en) * | 2011-03-18 | 2016-09-21 | SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH | Spectacle device with an adjustable field of view and method |
| DE102015000248A1 (en) * | 2015-01-13 | 2016-07-14 | Karlsruher Institut für Technologie | Sensor and sensor system for detecting an eyeball orientation and an accommodating contact lens with such a sensor or sensor system |
| CN105843397A (en) * | 2016-04-12 | 2016-08-10 | 公安部上海消防研究所 | Virtual reality interactive system based on pupil tracking technology |
| EP3339900B1 (en) * | 2016-12-22 | 2020-08-12 | Safran Vectronix AG | Observation device having an eye-controlled laser rangefinder |
| JP6832318B2 (en) * | 2018-10-01 | 2021-02-24 | アイウェイ ビジョン エルティーディー. | Eye projection system |
- 2022-11-18: CN application CN202211442435.2A filed; granted as CN115624315B, status Active
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230136669A1 (en) | Event camera-based gaze tracking using neural networks | |
| US10739849B2 (en) | Selective peripheral vision filtering in a foveated rendering system | |
| KR102410328B1 (en) | Method and apparatus for training face fusion model and electronic device | |
| CN107111753B (en) | Gaze detection offset for gaze tracking models | |
| CN108136258B (en) | Method and system for adjusting image frame based on tracking eye movement and head-mounted device | |
| US10192528B2 (en) | Real-time user adaptive foveated rendering | |
| CN110502099B (en) | Method for reliably detecting a correlation between gaze and stimulus | |
| US11972042B2 (en) | Variable intensity distributions for gaze detection assembly | |
| US20150098620A1 (en) | Position Estimation | |
| KR20210062000A (en) | Virtual try-on system and method for glasses | |
| US11487358B1 (en) | Display apparatuses and methods for calibration of gaze-tracking | |
| US10254831B2 (en) | System and method for detecting a gaze of a viewer | |
| US11402901B2 (en) | Detecting eye measurements | |
| EP3092616A1 (en) | Mapping glints to light sources | |
| EP3757655B1 (en) | Method and system for 3d cornea position estimation | |
| US12008711B2 (en) | Determining display gazability and placement of virtual try-on glasses using optometric measurements | |
| CN110610768B (en) | Eye use behavior monitoring method and server | |
| Toivanen et al. | Probabilistic approach to robust wearable gaze tracking | |
| CN115624315B (en) | Eye movement tracking method and device, electronic equipment, computer storage medium and product | |
| CN111767110B (en) | Image processing method, device, system, electronic equipment and storage medium | |
| CN112200169B (en) | Method, apparatus, device and storage medium for training a model | |
| CN115857678B (en) | Eye movement testing method, device, equipment and storage medium | |
| CN115955547B (en) | Camera adjustment method and system for XR glasses | |
| US20250378580A1 (en) | Gaze online learning | |
| CN115153517B (en) | Testing method, device, equipment and storage medium for timing, standing and walking test |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |