CN113378790B - Viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium - Google Patents
Viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium
- Publication number
- CN113378790B (application number CN202110772995.3A)
- Authority
- CN
- China
- Prior art keywords
- target
- human eye
- edge curve
- image
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20061—Hough transform
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure provides a viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium. The viewpoint positioning method comprises the following steps: acquiring a target human eye image; performing edge detection on the target human eye image, and determining an iris edge curve of the target human eye image; truncating the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is a circular boundary; respectively determining the circle center and radius of each edge curve segment; and determining the pupil center of the human eye in the target human eye image according to the circle centers and radii of the edge curve segments. The technical scheme provided by the embodiments of the present disclosure can locate the pupil center of the human eye simply and accurately.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a viewpoint positioning method and apparatus, an electronic device, and a computer readable storage medium.
Background
Viewpoint positioning is implemented mainly by hardware-based or software-based means. Hardware-based viewpoint tracking requires the user to wear a special helmet or special glasses, or to fix a camera on top of the head with a special bracket, and obtains viewpoint data with a viewpoint sensor; this approach has high recognition precision, but it restricts the user's movement and causes considerable interference. In recent years, software-based viewpoint tracking has become a research hotspot. This approach mainly uses a camera installed in front of the user's face to collect a sequence of face video images and uses an image processing algorithm to locate the human eyes.
In software-based viewpoint tracking, a Hough transform is generally used to determine the pupil position of the human eye for viewpoint positioning. However, the conventional Hough transform requires a huge amount of computation and has low search accuracy.
It should be noted that the information disclosed in the foregoing background section is only for enhancing understanding of the background of the present disclosure.
Disclosure of Invention
The present disclosure is directed to a viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium, which can improve both the accuracy and the efficiency of recognizing the pupil center of the human eye.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
The embodiment of the disclosure provides a viewpoint positioning method, which comprises the following steps: acquiring a target human eye image; performing edge detection on the target human eye image, and determining an iris edge curve of the target human eye image; truncating the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is a circular boundary; respectively determining the circle center and radius of each edge curve segment; and determining the pupil center of the human eye in the target human eye image according to the circle centers and radii of the edge curve segments.
In some embodiments, the plurality of edge curve segments includes a target edge curve segment, and determining the circle center and radius of each edge curve segment respectively includes: selecting at least three points from the target edge curve segment, wherein the target edge curve segment is in an image space; converting the at least three points on the target edge curve segment from the image space to a parameter space to generate at least three cones; determining a target point in the parameter space according to the at least three cones; converting the target point from the parameter space to the image space to determine a target circle in the image space according to the target point; and determining the circle center and radius corresponding to the target edge curve segment according to the target circle.
In some embodiments, selecting at least three points from the target edge curve segment comprises: dividing the target edge curve segment into N equal parts to obtain N bisection line segments, wherein N is an integer greater than or equal to 3; and selecting one point from each bisection line segment to generate the at least three points on the target edge curve segment.
In some embodiments, determining the pupil center of the human eye in the target human eye image according to the circle center and radius of each edge curve segment includes: determining the pairwise center distances between the edge curve segments; taking edge curve segments whose center distances are smaller than a first threshold as first candidate edge curve segments; determining, from the first candidate edge curve segments, second candidate edge curve segments whose radii are smaller than a second threshold; averaging the circle centers of the second candidate edge curve segments to determine a target circle center; and taking the target circle center as the pupil center of the human eye in the target human eye image.
In some embodiments, the target human eye image comprises a red channel image, and performing edge detection on the target human eye image to determine an iris edge curve of the target human eye image includes: performing edge detection on the target human eye image to obtain an eye line drawing, wherein the eye line drawing comprises the iris edge curve and noise; determining a human eye region image of the target human eye image in the red channel image; performing human eye edge extraction and expansion processing on the human eye region image to obtain an iris edge curve expansion map; and screening the eye line drawing through the iris edge curve expansion map to obtain the target edge of the target human eye image.
In some embodiments, performing edge detection on the target human eye image to obtain an eye line drawing includes: determining the gradient strength of all pixels in the target human eye image in a target direction; determining, according to the gradient strengths of the pixels in the target direction, local maximum pixels whose pixel values are local maxima in the target human eye image; determining local non-maximum pixels in the target human eye image according to the local maximum pixels; setting the pixel values of the local non-maximum pixels to a target value; determining, from the local maximum pixels, non-edge pixels whose gradient strength in the target direction is smaller than a third threshold; and setting the pixel values of the non-edge pixels to the target value, so as to perform edge detection on the target human eye image and obtain the eye line drawing.
In some embodiments, performing human eye edge extraction and expansion processing on the human eye region image to obtain an iris edge curve expansion map comprises: performing binarization processing on the human eye region image to obtain a binarized image; performing inversion and open-operation processing on the binarized image to obtain an iris foreground image; and performing edge extraction and expansion processing on the iris foreground image to obtain the iris edge curve expansion map.
The embodiment of the disclosure provides a viewpoint positioning device, which comprises: the device comprises a human eye image acquisition module, an iris edge curve determination module, an edge curve segment determination module, a radius determination module and a pupil center determination module.
The human eye image acquisition module is used for acquiring a target human eye image; the iris edge curve determining module can be used for performing edge detection on the target human eye image and determining an iris edge curve of the target human eye image; the edge curve segment determination module may be configured to truncate the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is a circular boundary; the radius determining module can be used for determining the circle center and the radius of each edge curve segment respectively; the pupil center determining module may be configured to determine a pupil center of a human eye in the target human eye image according to a circle center and a radius of each edge curve segment.
The embodiment of the disclosure provides an electronic device, which comprises: one or more processors; and a storage means for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement any of the above-described viewpoint locating methods.
The presently disclosed embodiments provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a viewpoint positioning method as set forth in any of the above.
Embodiments of the present disclosure propose a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the above-described viewpoint positioning method.
According to the viewpoint positioning method and apparatus, the electronic device, and the computer-readable storage medium provided by the embodiments of the present disclosure, on one hand, the pupil center can be determined by determining the iris edge curve, which improves the recognition accuracy of the pupil center; on the other hand, truncating the iris edge curve improves the recognition efficiency of the pupil center while also improving its recognition accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which a viewpoint positioning method or viewpoint positioning apparatus of an embodiment of the present disclosure may be applied.
Fig. 2 is a flowchart illustrating a method of viewpoint positioning according to an exemplary embodiment.
Fig. 3 is a diagram illustrating an iris edge curve determination method according to an exemplary embodiment.
Fig. 4 is a diagram illustrating an iris edge curve determination according to an exemplary embodiment.
Fig. 5 is a diagram illustrating a method of center determination according to an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating a spatial transformation according to an exemplary embodiment.
Fig. 7 is a diagram illustrating a pupil center determination method according to an example embodiment.
Fig. 8 is a comparison of pupil positioning results, according to an exemplary embodiment.
Fig. 9 is a diagram illustrating a pupil positioning result according to an exemplary embodiment.
FIG. 10 is a schematic diagram illustrating an error result according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating a viewpoint positioning device according to an exemplary embodiment.
Fig. 12 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which like reference numerals denote like or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the present specification, the terms "a," "an," "the," "said" and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc., in addition to the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and do not limit the number of their objects.
The following describes example embodiments of the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which a viewpoint positioning method or viewpoint positioning apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, wearable devices, virtual reality devices, smart homes, etc.
The server 105 may be a server providing various services, such as a background management server providing support for devices operated by users with the terminal devices 101, 102, 103. The background management server can analyze and process the received data such as the request and the like, and feed back the processing result to the terminal equipment.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms; the disclosure is not limited thereto.
The server 105 may, for example, acquire a target human eye image; the server 105 may, for example, perform edge detection on the target human eye image to determine an iris edge curve of the target human eye image; server 105 may, for example, truncate the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is a boundary of a circle; server 105 may, for example, determine the center and radius of each edge curve segment separately; server 105 may determine the pupil center of the human eye in the target human eye image, for example, based on the center and radius of each edge curve segment.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative, and that the server 105 may be a server of one entity, or may be composed of a plurality of servers, and may have any number of terminal devices, networks and servers according to actual needs.
Fig. 2 is a flowchart illustrating a method of viewpoint positioning according to an exemplary embodiment. The method provided by the embodiments of the present disclosure may be performed by any electronic device having computing processing capability, for example, the method may be performed by a server or a terminal device in the embodiment of fig. 1, or may be performed by both the server and the terminal device, and in the following embodiments, the server is taken as an example to illustrate an execution subject, but the present disclosure is not limited thereto.
Referring to fig. 2, the viewpoint positioning method provided by the embodiment of the present disclosure may include the following steps.
In step S202, a target human eye image is acquired.
In some embodiments, the target human eye image may refer to an image including the human eye to be identified, as shown in fig. 4 (a). It will be appreciated that the less background information other than the human eye the target human eye image contains, the better.
Step S204, edge detection is carried out on the target human eye image, and an iris edge curve of the target human eye image is determined.
The iris edge curve may refer to an edge curve of an iris in a human eye to be recognized in a target human eye image as shown in (f) of fig. 4.
Step S206, cutting the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is a circular boundary.
In practice, the iris edge curve may be discontinuous and composed of a plurality of curve segments, and thus may be truncated into a plurality of discontinuous edge curve segments.
In other embodiments, the iris edge curve may also be truncated according to a distance threshold; for example, the iris edge curve may be truncated into segments every 100 pixels.
The present disclosure does not limit the method of truncating the iris edge curve into a plurality of edge curve segments.
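As an illustration only, such a truncation might be sketched as follows in Python; the use of OpenCV contour tracing, the 100-pixel segment length, and the function name are assumptions of this sketch, not part of the disclosure:

```python
import cv2
import numpy as np

def truncate_edge_curves(edge_map, seg_len=100):
    """Split each connected edge curve in a binary edge map into
    segments of roughly seg_len pixels (assumed threshold)."""
    # Contour tracing is one assumed way to obtain ordered edge pixels.
    contours, _ = cv2.findContours(edge_map.astype(np.uint8),
                                   cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    segments = []
    for contour in contours:
        pts = contour.reshape(-1, 2)              # ordered (x, y) edge pixels
        for start in range(0, len(pts), seg_len):
            seg = pts[start:start + seg_len]
            if len(seg) >= 3:                     # need >= 3 points for a circle fit
                segments.append(seg)
    return segments
```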
Step S208, determining the circle center and radius of each edge curve segment respectively.
Step S210, determining the pupil center of the human eye in the target human eye image according to the circle centers and radii of the edge curve segments.
According to the technical scheme provided by the embodiments of the present disclosure, on one hand, the pupil center can be determined by determining the iris edge curve, so that the recognition accuracy of the pupil center is improved; on the other hand, truncating the iris edge curve improves the recognition efficiency of the pupil center while also improving its recognition accuracy.
The technical scheme provided by the present disclosure locates the pupil center well and is an efficient, accurate moving-target detection technique. It can still locate the pupil center well under different illumination conditions and when the head is deflected by a certain angle, and therefore has a certain robustness.
With the development of artificial intelligence, robots are attracting increasing attention, and the robot vision system, as the most intuitive way for a robot to acquire external information, has important research value. Research on robot vision systems applies to many fields; in particular, video monitoring, as an important direction in the computer vision field, has been applied to traffic management, sports competition, video surveillance, and other fields. The method provided by the present disclosure can perform face detection on acquired video stream images, locate facial feature points in the detected face regions and compute the eye regions of interest, preprocess the eye region images for edge extraction and screening, and finally perform pupil identification and center positioning on the extracted eye regions using an improved Hough transform. The whole system requires little computation, has high real-time performance, and has broad application prospects. For example, the method can be applied to the following scenarios:
1. In human-computer interaction and similar systems, the method can free both of the user's hands and complete many operations that otherwise could not be performed under certain conditions, such as non-contact remote control and controlling a target while both hands are occupied; it can also effectively help disabled users operate a computer.
2. Automatic driving assistance and detection of driver fatigue.
3. Virtual reality.
Fig. 3 is a diagram illustrating an iris edge curve determination method according to an exemplary embodiment.
In some embodiments, the target human eye image comprises a red channel image.
Referring to fig. 3, the iris edge curve determination method may include the following steps.
Step S302, performing edge detection on the target human eye image to obtain an eye line drawing, where the eye line drawing includes the iris edge curve together with non-iris edge curves and noise.
In some embodiments, Canny edge detection may be performed on the target human eye image to obtain an eye line drawing as shown in fig. 4 (b).
In some embodiments, the following method may be used to perform edge detection on the target human eye image to obtain the eye line drawing.
Determining the gradient strength (in practice, this may be the absolute value of the gradient) of all pixels in the target human eye image in a target direction; determining, according to the gradient strengths of the pixels in the target direction, local maximum pixels whose pixel values are local maxima in the target human eye image; determining local non-maximum pixels in the target human eye image according to the local maximum pixels; setting the pixel values of the local non-maximum pixels to a target value; determining, from the local maximum pixels, non-edge pixels whose gradient strength in the target direction is smaller than a third threshold; and setting the pixel values of the non-edge pixels to the target value, so as to perform edge detection on the target human eye image and obtain the eye line drawing.
The target direction may refer to a plurality of specified directions such as up, down, left, right, and the like.
Determining the local maximum pixels whose pixel values are local maxima in the target human eye image may include: determining a target pixel in the target human eye image (this embodiment takes a single target pixel as an example) and comparing the gradient intensity of the target pixel with the gradient intensities of the other pixels in each direction within a local area, so as to determine whether the target pixel is a local maximum within that local area.
For example, assuming that the target direction includes the four directions up, down, left, and right, the gradient intensities of the target pixel in these four directions may be compared with all gradient intensities of the other pixels within the local area of the target pixel (for example, a local area formed by expanding 5 pixels outward in the four directions from the target pixel); if the gradient intensity of the target pixel in one of these directions is greater than the gradient intensities of the other pixels, the target pixel is determined to be a local maximum pixel.
In still other embodiments, determining that the pixel value is a local maximum pixel of a local maximum in the target human eye image may include: in the target human eye image, a target pixel is determined (in this embodiment, only the target pixel is taken as an example), and if the pixel value of the target pixel is determined to be maximum in a local area (for example, a local area formed by extending 5 pixels outwards in four directions of up, down, left and right of the target pixel), the pixel value corresponding to the target pixel is determined to be a local maximum.
In some embodiments, pixels in the target human eye image other than the local maximum pixels are taken as the local non-maximum pixels.
In some embodiments, the target value may be 0.
In some embodiments, real and potential edges may also be determined in the target human eye image by double-threshold detection, and isolated weak edges may then be suppressed, to obtain an eye line map as shown in fig. 4 (b).
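The gradient computation, non-maximum suppression, and double-threshold steps described above correspond to the classical Canny pipeline. A minimal sketch follows; the library choice and threshold values are assumptions, as the disclosure does not mandate a specific implementation:

```python
import cv2

def eye_line_drawing(eye_gray, low=50, high=150):
    """Obtain a binary eye line drawing. cv2.Canny internally performs
    gradient-strength computation, non-maximum suppression, and
    double-threshold hysteresis, matching the steps described above."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 1.4)  # suppress noise first
    return cv2.Canny(blurred, low, high)               # edges = 255, rest = 0
```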
As shown in fig. 4 (b), the eye line drawing includes not only the iris edge but also a great deal of non-iris edge information and noise.
Step S304, determining a human eye region image in the target human eye image in the red channel image.
According to the color distribution of the iris and pupil region, the iris differs significantly from the skin and sclera in the red channel.
Step S306, performing human eye edge extraction and expansion processing on the human eye region image to obtain an iris edge curve expansion map.
In some embodiments, edge extraction and expansion processing may be performed on the eye region image to obtain an iris edge curve expansion map as shown in fig. 4 (e); the iris edge curve expansion map is the image obtained by dilating the extracted iris edge.
In some embodiments, the human eye edge extraction and expansion processing may be performed on the human eye region image as follows: performing binarization processing on the human eye region image to obtain a binarized image as shown in fig. 4 (c); performing inversion and open-operation processing on the binarized image to obtain an iris foreground image as shown in fig. 4 (d); and sequentially performing edge extraction and edge expansion processing on the iris foreground image to obtain an iris edge curve expansion map as shown in fig. 4 (e).
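A minimal sketch of this binarize-invert-open-extract-dilate chain, assuming OpenCV and illustrative threshold and kernel sizes:

```python
import cv2

def iris_edge_expansion_map(eye_region_red, thresh=70):
    """Red-channel eye region -> binarize -> invert (the dark iris becomes
    foreground) -> open operation (remove small noise) -> edge extraction
    -> dilation. Threshold and kernel sizes are illustrative assumptions."""
    _, binary = cv2.threshold(eye_region_red, thresh, 255, cv2.THRESH_BINARY)
    inverted = cv2.bitwise_not(binary)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    foreground = cv2.morphologyEx(inverted, cv2.MORPH_OPEN, kernel)
    edges = cv2.Canny(foreground, 50, 150)
    return cv2.dilate(edges, kernel, iterations=2)
```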
Step S308, screening the eye line drawing through the iris edge curve expansion map to obtain the target edge of the target human eye image.
In some embodiments, the iris edge curve expansion map and the eye line drawing may be combined by a "superimposed intersection" process: a pixel of the eye line drawing is set to 1 if the overlapping position in the iris edge curve expansion map also has the pixel value 1, and is set to 0 otherwise, thereby obtaining the target edge of the target human eye image (as shown in fig. 4 (f)).
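The "superimposed intersection" is effectively a per-pixel logical AND; a sketch under that reading (the function name is assumed):

```python
import numpy as np

def screen_eye_line_drawing(eye_lines, expansion_map):
    """Keep a pixel of the eye line drawing only where the iris edge
    curve expansion map is also set; everything else becomes 0."""
    keep = (eye_lines > 0) & (expansion_map > 0)
    return keep.astype(np.uint8)   # 1 where both maps overlap, else 0
```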
According to the technical scheme provided by this embodiment of the present disclosure, the facial feature points are accurately located in the detected face region, the accurate position of the human eye region is calculated from the pixel coordinates of the extracted feature points, edge detection is then performed on the human eye region to obtain an eye line drawing, and the detection result is screened according to the color distribution prior of the human iris image to obtain the target edge of the target human eye image. On one hand, this scheme screens the eye line drawing through the iris edge curve expansion map, denoising the non-iris edge curves in the eye line drawing; on the other hand, in the red channel image the iris is accurately distinguished from the skin and sclera, and the iris edge curve expansion map is determined simply and accurately, ensuring that the iris edge curve is necessarily included within the dilated curves of the expansion map, so that the denoising of the eye line drawing can be performed accurately through the iris edge curve expansion map.
Fig. 5 is a diagram illustrating a method of center determination according to an exemplary embodiment.
In some embodiments, the plurality of edge curve segments includes a target edge curve segment, and embodiments of the present disclosure will be described with reference to the target edge curve segment as to how the center and radius of the edge curve segment are determined, but the present disclosure is not limited thereto.
In some embodiments, the circle center and radius corresponding to the target edge curve segment could be determined through a standard Hough transform; however, the Hough transform requires a large amount of computation, and neither accuracy nor efficiency can be guaranteed.
The embodiment provides a circle center determining method for determining the circle center of a target edge curve segment.
Referring to fig. 5, the above-described center determining method may include the following steps.
Step S502, selecting at least three points from the target edge curve segment, wherein the target edge curve segment is in an image space.
Where image space refers to the space in which points on the target edge curve segment are located.
In some embodiments, three points may be randomly selected from the target edge curve segment; alternatively, the target edge curve segment may be divided into N equal parts to obtain N bisection line segments, where N is an integer greater than or equal to 3, and one point is then selected from each bisection line segment to generate at least three points on the target edge curve segment.
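A sketch of the N-equal-division sampling, assuming the segment is an ordered array of pixel coordinates; picking the midpoint of each part is one arbitrary choice, as any point of the part would do:

```python
import numpy as np

def sample_points(segment_pts, n=3):
    """Divide an ordered (x, y) point array into n (>= 3) near-equal
    parts and take one point (here: the midpoint) from each part."""
    assert n >= 3, "at least three points are needed to determine a circle"
    parts = np.array_split(np.asarray(segment_pts), n)
    return [tuple(p[len(p) // 2]) for p in parts if len(p) > 0]
```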
Step S504 transforms at least three points on the target edge curve segment from the image space to a parameter space to generate at least three cones.
In some embodiments, the circle on which the target edge curve segment lies may be represented in the image space by equation (1):
(x_i - a)^2 + (y_i - b)^2 = r^2 (1)
As shown in fig. 6, any point (x_1, y_1) on the target edge curve segment corresponds to a cone (a - x_1)^2 + (b - y_1)^2 = r^2 in the parameter space constructed from the parameters a, b, and r (shown in fig. 6 (1)), and a circle in the image space corresponds to a point at which these cones intersect.
Step S506, determining a target point in the parameter space according to the at least three cones.
As shown in fig. 6, a point at which the above at least three cones intersect may be taken as the target point; alternatively, the point at which the largest number of cones intersect may be taken as the target point. The present disclosure is not limited in this respect.
Step S508, converting the target point from the parameter space to the image space, so as to determine a target circle in the image space according to the target point.
In some embodiments, a point (a, b, r) in the parameter space corresponds to a circle in the image space, so that after the target point is determined in the parameter space, the corresponding target circle in the image space can then be determined.
Step S510, determining the circle center and radius corresponding to the target edge curve segment according to the target circle.
In some embodiments, the target circle may be a circle in which the target edge curve segment is located, and the center and radius of the target circle may be the center and radius of the target edge curve segment.
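With exactly three sampled points, the intersection of the three cones in (a, b, r) parameter space reduces to the circumcircle of the three image-space points; the following sketch of that closed-form computation is an assumption about how the intersection would be evaluated in practice:

```python
import numpy as np

def fit_circle(p1, p2, p3):
    """Center (a, b) and radius r of the circle through three points.
    Subtracting equation (1) for pairs of points cancels r^2 and leaves
    two linear equations in the center coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = 2.0 * np.array([[x2 - x1, y2 - y1],
                        [x3 - x1, y3 - y1]], dtype=float)
    rhs = np.array([x2**2 - x1**2 + y2**2 - y1**2,
                    x3**2 - x1**2 + y3**2 - y1**2], dtype=float)
    a, b = np.linalg.solve(A, rhs)       # raises if the points are collinear
    r = np.hypot(x1 - a, y1 - b)
    return (a, b), r
```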
According to the technical scheme provided by the embodiment of the disclosure, the circle center and the radius of the circle where the target edge curve is located can be determined according to at least three points on the target edge curve section, so that the accuracy of determining the target circle can be improved, and the efficiency of determining the target circle can be improved.
Fig. 7 is a diagram illustrating a pupil center determination method according to an example embodiment.
Referring to fig. 7, the pupil center determination method described above may include the following steps.
Step S702, determining the pairwise center distances between the edge curve segments.
In some embodiments, each edge curve segment may correspond to a center and a radius.
In some embodiments, the center distance between edge curve segments may be calculated.
In step S704, an edge curve segment with a center distance smaller than a first threshold is used as a first candidate edge curve segment.
Step S706, determining a second candidate edge curve segment with a radius smaller than a second threshold value from the first candidate edge curve segments.
Wherein the second threshold may be a threshold set by a person skilled in the art according to the radius of the human eye.
Step S708, averaging the circle centers of the second candidate edge curve segments to determine a target circle center.
Step S710, taking the target center of circle as the pupil center of the human eye in the target human eye image.
In other embodiments, edge curve segments whose radii are greater than or equal to the second threshold may first be removed to obtain third candidate edge curve segments; the third candidate edge curve segments are then clustered, the third candidate edge curve segments in the cluster with the largest number of members are taken as fourth candidate edge curve segments, and finally the circle centers of the fourth candidate edge curve segments are averaged to determine the target circle center.
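A minimal sketch of the filtering-and-averaging route of steps S702 to S710; the threshold values and the nearest-neighbor criterion are illustrative assumptions:

```python
import numpy as np

def pupil_center(circles, dist_thresh=10.0, radius_thresh=80.0):
    """circles: list of ((a, b), r) fitted per edge curve segment.
    Keep circles whose center is near at least one other center and
    whose radius is plausibly iris-sized, then average the centers."""
    if len(circles) < 2:
        return None                                  # need at least two circles
    centers = np.array([c for c, _ in circles], dtype=float)
    radii = np.array([r for _, r in circles], dtype=float)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore self-distances
    keep = (d.min(axis=1) < dist_thresh) & (radii < radius_thresh)
    if not keep.any():
        return None                                  # no consistent circles found
    return tuple(centers[keep].mean(axis=0))         # target circle center
```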
In order to compare the detection efficiency and accuracy of the conventional Hough transform with those of the technical scheme provided by this embodiment, a set of comparison experiments was designed in the present disclosure. The pupil center in the same image is detected by the two algorithms; fig. 8 (a) is the detection result using the conventional Hough transform, and fig. 8 (b) is the detection result using an embodiment provided in the present disclosure.
Comparing the results of the two algorithms yields the data shown in Table 1.
TABLE 1
The data in the table show that the circle calculated by the conventional Hough transform deviates considerably from the real circle and that its detection process takes a long time, while both the detection precision and the detection time of the method provided by this scheme are greatly improved.
To verify the validity of the algorithm, a test was performed on the BioID face database. The BioID database contains 1521 gray-scale images in total, with a resolution of 384 x 256 pixels, taken of 23 different testers under varying illumination, head pose, and facial expression conditions, so the dataset is well suited for checking the accuracy and robustness of pupil positioning. Fig. 9 shows the results of locating pupils on part of the BioID dataset (the marked viewpoint positions of the human eyes can be seen after the picture is enlarged).
The accuracy of positioning is measured using a normalized error, whose formula is shown as equation (2):
e = max(x_l, x_r) / x (2)
wherein x_l is the distance between the located left-eye pupil center and its true value, x_r is the distance between the located right-eye pupil center and its true value, and x is the true interpupillary distance. A normalized error accuracy plot as shown in fig. 10 was obtained.
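As a sketch of evaluating equation (2) for one image (the function name and coordinate convention are assumptions):

```python
import numpy as np

def normalized_error(pred_left, pred_right, true_left, true_right):
    """Equation (2): e = max(x_l, x_r) / x, with x_l and x_r the distances
    between predicted and true pupil centers, and x the true
    interpupillary distance."""
    x_l = np.linalg.norm(np.subtract(pred_left, true_left))
    x_r = np.linalg.norm(np.subtract(pred_right, true_right))
    x = np.linalg.norm(np.subtract(true_left, true_right))
    return max(x_l, x_r) / x
```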
When e ≤ 0.05, the pupil center located by the algorithm can be regarded as the true pupil center position; when e ≤ 0.1, the pupil center located by the algorithm lies within the iris; and when e ≤ 0.25, the pupil center located by the algorithm lies within the human eye region. The illustrated results show that the positioning accuracy of the algorithm is 96.5% at e ≤ 0.05 and 99.6% at e ≤ 0.1, which verifies that the pupil positioning algorithm provided by the present disclosure has high accuracy.
Fig. 11 is a block diagram illustrating a viewpoint positioning device according to an exemplary embodiment. Referring to fig. 11, a viewpoint positioning apparatus 1100 provided by an embodiment of the present disclosure may include: a human eye image acquisition module 1101, an iris edge curve determination module 1102, an edge curve segment determination module 1103, a radius determination module 1104, and a pupil center determination module 1105.
Wherein, the human eye image acquisition module 1101 may be configured to acquire a target human eye image; the iris edge curve determining module 1102 may be configured to perform edge detection on the target human eye image, and determine an iris edge curve of the target human eye image; the edge curve segment determination module 1103 may be configured to truncate the iris edge curve into a plurality of edge curve segments, where each edge curve segment is a circular boundary; the radius determination module 1104 may be configured to determine the center and radius of each edge curve segment, respectively; the pupil center determining module 1105 may be configured to determine a pupil center of a human eye in the target human eye image according to a center and a radius of each edge curve segment.
In some embodiments, the plurality of edge curve segments includes a target edge curve segment, and the radius determination module 1104 may include: a three-point interception sub-module, a cone determination sub-module, a target point determination sub-module, a target circle determination sub-module, and a circle center determination sub-module.
The three-point intercepting sub-module can be used for selecting at least three points from the target edge curve segment, wherein the target edge curve segment is in an image space; the cone determination submodule may be used to convert at least three points on the target edge curve segment from the image space to a parameter space to generate at least three cones; the target point determination submodule may be configured to determine a target point in the parameter space based on the at least three cones; the target circle determination submodule may be used for converting the target point from the parameter space to the image space to determine a target circle in the image space according to the target point; the center determination submodule may be used for determining a corresponding center and radius of the target edge curve segment according to the target circle.
In some embodiments, the three-point interception sub-module may include: a bisection unit and a point selecting unit.
The bisection unit may be configured to divide the target edge curve segment by N to obtain N bisection segments, where N is an integer greater than or equal to 3; the point selection unit may be configured to select a point from each of the bisected line segments, respectively, to generate at least three points on the target edge curve segment.
In some embodiments, the pupil center determination module 1105 may include: a center distance determination sub-module, a first candidate edge curve segment determination sub-module, a second candidate edge curve segment determination sub-module, a target circle center determination sub-module, and a pupil center determination sub-module.
The circle center distance determining submodule can be used for determining the circle center distances between every two of the edge curve sections; the first candidate edge curve segment determining submodule may be configured to use an edge curve segment with a center distance smaller than a first threshold value as a first candidate edge curve segment; the second candidate edge curve segment determination submodule may be configured to determine a second candidate edge curve segment from the first candidate edge curve segment having a radius less than a second threshold; the target circle center determining submodule can be used for carrying out average value processing on the circle centers of the second candidate edge curve segments so as to determine the target circle center; the pupil center determination submodule may be used for taking the target circle center as the pupil center of the human eye in the target human eye image.
In some embodiments, the target human eye image comprises a red channel image, and the iris edge curve determination module 1102 may include: an eye line drawing determining sub-module, a human eye region image determining sub-module, an iris edge curve expansion map determining sub-module, and a screening sub-module.
The eye line drawing determining sub-module may be used to perform edge detection on the target human eye image to obtain an eye line drawing, wherein the eye line drawing includes the iris edge curve together with non-iris edge curves and noise; the human eye region image determining sub-module may be configured to determine a human eye region image of the target human eye image in the red channel image; the iris edge curve expansion map determining sub-module may be used to perform human eye edge extraction and expansion processing on the human eye region image to obtain an iris edge curve expansion map; and the screening sub-module may be used to screen the eye line drawing through the iris edge curve expansion map to obtain the target edge of the target human eye image.
In some embodiments, the eye line drawing determining sub-module may include: a gradient strength determining unit, a local maximum pixel determining unit, a local non-maximum determining unit, a target value determining unit, a non-edge pixel determining unit, and an eye line drawing unit.
The gradient strength determining unit may be configured to determine the gradient strengths of all pixels in the target human eye image in a target direction; the local maximum pixel determining unit may be configured to determine, in the target human eye image, local maximum pixels whose pixel values are local maxima according to the gradient strength of each pixel in the target direction; the local non-maximum determining unit may be configured to determine local non-maximum pixels in the target human eye image according to the local maximum pixels; the target value determining unit may be configured to set the pixel values of the local non-maximum pixels to a target value; the non-edge pixel determining unit may be configured to determine, from the local maximum pixels, non-edge pixels whose gradient strength in the target direction is smaller than a third threshold; and the eye line drawing unit may be configured to set the pixel values of the non-edge pixels to the target value, so as to perform edge detection on the target human eye image and obtain the eye line drawing.
In some embodiments, the iris edge curve expansion map determination sub-module may include: a binarization unit, an iris foreground image determining unit, and an expansion unit.
The binarization unit may be configured to perform binarization processing on the human eye region image to obtain a binarized image; the iris foreground image determining unit may be configured to perform inversion and open-operation processing on the binarized image to obtain an iris foreground image; and the expansion unit may be configured to perform edge extraction and expansion processing on the iris foreground image to obtain the iris edge curve expansion map.
Since the functions of the apparatus 1100 are described in detail in the corresponding method embodiments, the disclosure is not repeated herein.
The modules (and/or sub-modules and/or units) involved in the embodiments of the present application may be implemented in software or in hardware. The described modules (and/or sub-modules and/or units) may also be provided in a processor. The names of these modules (and/or sub-modules and/or units) do not, in some cases, constitute a limitation of the modules (and/or sub-modules and/or units) themselves.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Fig. 12 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. It should be noted that the electronic device 1200 shown in fig. 12 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 12, the electronic apparatus 1200 includes a Central Processing Unit (CPU) 1201, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1202 or a program loaded from a storage section 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the electronic apparatus 1200 are also stored. The CPU 1201, ROM 1202, and RAM 1203 are connected to each other through a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
The following components are connected to the I/O interface 1205: an input section 1206 including a keyboard, a mouse, and the like; an output section 1207 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 1208 including a hard disk or the like; and a communication section 1209 including a network interface card such as a LAN card or a modem. The communication section 1209 performs communication processing via a network such as the Internet. The drive 1210 is also connected to the I/O interface 1205 as needed. A removable medium 1211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1210 as needed, so that a computer program read therefrom is installed into the storage section 1208 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 1209, and/or installed from the removable media 1211. The above-described functions defined in the system of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 1201.
It should be noted that the computer readable storage medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
As another aspect, the present application also provides a computer-readable storage medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer-readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: acquiring a target human eye image; performing edge detection on the target human eye image, and determining an iris edge curve of the target human eye image; truncating the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is a circular boundary; respectively determining the circle center and the radius of each edge curve segment; and determining the pupil center of the human eye in the target human eye image according to the circle centers and the radiuses of the edge curve segments.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative implementations of the above-described embodiments.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, aspects of the disclosed embodiments may be embodied in a software product, which may be stored on a non-volatile storage medium (a CD-ROM, a USB flash drive, a mobile hard disk, etc.) and includes instructions for causing a computing device (a personal computer, a server, a mobile terminal, a smart device, etc.) to perform a method according to embodiments of the disclosure, for example one or more of the steps shown in figs. 2, 3, 5, and 7.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. The disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the exact constructions, drawings, or implementations set forth herein, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (9)
1. A viewpoint positioning method, comprising:
acquiring a target human eye image;
performing edge detection on the target human eye image, and determining an iris edge curve of the target human eye image;
truncating the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is an arc of a circular boundary;
respectively determining the circle center and the radius of each edge curve segment; and
determining the pupil center of the human eye in the target human eye image according to the circle centers and radii of the edge curve segments;
wherein the plurality of edge curve segments includes a target edge curve segment, and wherein respectively determining the circle center and the radius of each edge curve segment comprises:
selecting at least three points from the target edge curve segment, wherein the target edge curve segment is in an image space;
converting the at least three points on the target edge curve segment from the image space to a parameter space, and generating at least three cones in the parameter space by respectively taking the at least three points as the centers of the cone base circles, wherein one point corresponds to one cone;
taking a point at which the at least three cones intersect, or the point at which the most cones intersect, as a target point in the parameter space;
converting the target point from the parameter space back to the image space, and determining a target circle in the image space according to the target point; and
taking the circle center and the radius of the target circle as the circle center and the radius of the target edge curve segment.
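The cone-intersection step above is, in effect, a circle Hough transform: every circle with center (a, b) and radius r that passes through an image point (x, y) satisfies (x - a)^2 + (y - b)^2 = r^2, so each selected point traces a cone in (a, b, r) parameter space, and the cell where the most cones intersect yields the target circle. A minimal Python/NumPy sketch of this voting scheme follows; the accumulator ranges, step size, angular sampling, and the function name `fit_circle_hough` are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch of circle fitting by cone voting in (a, b, r) parameter space.
# For each point (x, y) and each radius r, the candidate centers
# a = x - r*cos(t), b = y - r*sin(t) trace that point's cone; votes accumulate
# and the argmax cell is the target point, i.e. the target circle.
import numpy as np

def fit_circle_hough(points, a_range, b_range, r_range, step=1.0):
    a_bins = np.arange(a_range[0], a_range[1], step)
    b_bins = np.arange(b_range[0], b_range[1], step)
    r_bins = np.arange(r_range[0], r_range[1], step)
    acc = np.zeros((len(a_bins), len(b_bins), len(r_bins)), dtype=np.int32)
    for (x, y) in points:                         # at least three points per segment
        for ri, r in enumerate(r_bins):
            for t in np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False):
                a = x - r * np.cos(t)
                b = y - r * np.sin(t)
                ai = int(np.floor((a - a_range[0]) / step))
                bi = int(np.floor((b - b_range[0]) / step))
                if 0 <= ai < len(a_bins) and 0 <= bi < len(b_bins):
                    acc[ai, bi, ri] += 1          # one vote on this point's cone
    ai, bi, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return a_bins[ai], b_bins[bi], r_bins[ri]     # center (a, b) and radius r
```

With exactly three noise-free points, the three cones intersect in a single cell; with more points or noisy edges, the most-voted cell plays the role of the target point described in the claim.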
2. The method of claim 1, wherein selecting at least three points from the target edge curve segment comprises:
dividing the target edge curve segment into N equal parts to obtain N bisected line segments, wherein N is an integer greater than or equal to 3; and
selecting one point from each of the N line segments to obtain the at least three points on the target edge curve segment.
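As a hedged sketch of this sampling, assuming the target edge curve segment is available as an ordered list of (x, y) pixel coordinates (an assumption, since the claim does not fix a representation), one point, here the midpoint, is taken from each of the N equal parts:

```python
# Split an ordered edge curve segment into N equal parts and pick one point
# (here the midpoint) from each part; midpoint selection is illustrative.
def sample_points(segment, n=3):
    if n < 3 or len(segment) < n:
        raise ValueError("need n >= 3 and at least n points on the segment")
    size = len(segment) // n                # length of each bisected sub-segment
    return [segment[i * size + size // 2] for i in range(n)]
```

Spreading the samples along the whole arc keeps the chosen points from being nearly collinear, which stabilizes the circle fit.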
3. The method of claim 1, wherein determining the pupil center of the human eye in the target human eye image according to the circle centers and radii of the edge curve segments comprises:
determining the pairwise distances between the circle centers of the edge curve segments;
taking edge curve segments whose center distances are smaller than a first threshold as first candidate edge curve segments;
determining, from the first candidate edge curve segments, second candidate edge curve segments with radii smaller than a second threshold;
averaging the circle centers of the second candidate edge curve segments to determine a target circle center; and
taking the target circle center as the pupil center of the human eye in the target human eye image.
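A sketch of this selection logic, assuming each fitted segment is summarized as a (cx, cy, r) tuple and using illustrative thresholds (the claim does not specify values):

```python
# Filter fitted circles by mutual center distance and by radius, then average
# the surviving centers to obtain the pupil center. Thresholds are assumptions.
import numpy as np

def pupil_center(circles, dist_thresh=5.0, radius_thresh=60.0):
    centers = np.array([(cx, cy) for cx, cy, _ in circles], dtype=float)
    radii = np.array([r for _, _, r in circles], dtype=float)
    # first filter: keep segments whose center lies near at least one other center
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    near = d.min(axis=1) < dist_thresh
    # second filter: among those, drop implausibly large radii
    keep = near & (radii < radius_thresh)
    if not keep.any():
        return None                           # no mutually consistent segments
    return tuple(centers[keep].mean(axis=0))  # target circle center = pupil center
```

Segments belonging to the true iris boundary all fit nearly the same circle, so their centers cluster tightly, while segments caused by eyelids or glints are rejected by one of the two filters.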
4. The method of claim 1, wherein the target human eye image comprises a red channel image, and wherein performing edge detection on the target human eye image to determine the iris edge curve of the target human eye image comprises:
performing edge detection on the target human eye image to obtain an eye line drawing, wherein the eye line drawing comprises an iris edge curve and noise;
determining, in the red channel image, a human eye region image of the target human eye image;
performing human eye edge extraction and dilation on the human eye region image to obtain an iris edge curve dilation map; and
screening the eye line drawing with the iris edge curve dilation map to obtain the target edge of the target human eye image.
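The screening step can be sketched with OpenCV as a mask operation: the noisy eye line drawing is kept only where the iris edge curve dilation map (produced as in claim 6) is set. The Canny thresholds and the function name are assumptions, and locating the human eye region in the red channel is taken as given here.

```python
# Hedged sketch: mask the Canny line drawing of the eye image with the dilated
# iris edge so that eyelash/glint edges away from the iris boundary are removed.
import cv2

def screen_eye_line_drawing(eye_bgr, dilation_map):
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    line_drawing = cv2.Canny(gray, 50, 150)   # iris edge curve plus noise
    # keep only line-drawing pixels inside the iris edge curve dilation map
    return cv2.bitwise_and(line_drawing, line_drawing, mask=dilation_map)
```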
5. The method of claim 4, wherein performing edge detection on the target human eye image to obtain the eye line drawing comprises:
determining the gradient strength of each pixel in the target human eye image in a target direction;
determining, according to the gradient strengths in the target direction, local maximum pixels whose pixel values are local maxima in the target human eye image;
determining local non-maximum pixels in the target human eye image according to the local maximum pixels;
setting the pixel values of the local non-maximum pixels to a target value;
determining, from the local maximum pixels, non-edge pixels whose gradient strength in the target direction is smaller than a third threshold; and
setting the pixel values of the non-edge pixels to the target value, so as to complete edge detection on the target human eye image and obtain the eye line drawing.
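Claim 5 describes Canny-style non-maximum suppression followed by threshold suppression. A simplified Python sketch follows; quantizing the gradient direction to 0/45/90/135 degrees is a common simplification and an assumption here, as is the third-threshold value.

```python
# Hedged sketch: suppress pixels that are not local maxima of gradient strength
# along the gradient direction (set to the target value 0), then suppress the
# remaining pixels whose gradient strength falls below the third threshold.
import numpy as np

def thin_edges(magnitude, direction, third_threshold=20.0):
    out = magnitude.copy()
    h, w = magnitude.shape
    ang = np.rad2deg(direction) % 180.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = ang[y, x]
            if a < 22.5 or a >= 157.5:          # ~horizontal gradient
                n1, n2 = magnitude[y, x - 1], magnitude[y, x + 1]
            elif a < 67.5:                      # ~45 degrees
                n1, n2 = magnitude[y - 1, x + 1], magnitude[y + 1, x - 1]
            elif a < 112.5:                     # ~vertical gradient
                n1, n2 = magnitude[y - 1, x], magnitude[y + 1, x]
            else:                               # ~135 degrees
                n1, n2 = magnitude[y - 1, x - 1], magnitude[y + 1, x + 1]
            if magnitude[y, x] < n1 or magnitude[y, x] < n2:
                out[y, x] = 0                   # local non-maximum pixel
    out[out < third_threshold] = 0              # non-edge pixels
    return out
```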
6. The method of claim 4, wherein performing human eye edge extraction and dilation on the human eye region image to obtain the iris edge curve dilation map comprises:
performing binarization processing on the human eye region image to obtain a binarized image;
performing inversion and opening operations on the binarized image to obtain an iris foreground image; and
performing edge extraction and dilation on the iris foreground image to obtain the iris edge curve dilation map.
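An OpenCV sketch of this pipeline, with Otsu binarization and the kernel sizes as illustrative assumptions (the claim specifies the operations but not their parameters):

```python
# Hedged sketch: binarize the eye region, invert and open to isolate the dark
# iris as foreground, then extract its edge and dilate it into the iris edge
# curve dilation map used for screening in claim 4.
import cv2
import numpy as np

def iris_dilation_map(eye_region_gray):
    _, binary = cv2.threshold(eye_region_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    inverted = cv2.bitwise_not(binary)                      # dark iris -> white
    opened = cv2.morphologyEx(inverted, cv2.MORPH_OPEN,
                              np.ones((5, 5), np.uint8))    # iris foreground image
    edge = cv2.Canny(opened, 50, 150)                       # edge extraction
    return cv2.dilate(edge, np.ones((7, 7), np.uint8))      # dilation map
```

The dilation widens the thin iris contour into a band, so that slightly misaligned Canny edges in the full-image line drawing still fall inside the mask.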
7. A viewpoint positioning device, characterized by comprising:
a human eye image acquisition module for acquiring a target human eye image;
an iris edge curve determining module for performing edge detection on the target human eye image and determining an iris edge curve of the target human eye image;
an edge curve segment determining module for truncating the iris edge curve into a plurality of edge curve segments, wherein each edge curve segment is an arc of a circular boundary;
a circle center and radius determining module for respectively determining the circle center and the radius of each edge curve segment; and
a pupil center determining module for determining the pupil center of the human eye in the target human eye image according to the circle centers and radii of the edge curve segments;
wherein the plurality of edge curve segments includes a target edge curve segment, and wherein respectively determining the circle center and the radius of each edge curve segment comprises:
selecting at least three points from the target edge curve segment, wherein the target edge curve segment is in an image space;
converting the at least three points on the target edge curve segment from the image space to a parameter space, and generating at least three cones in the parameter space by respectively taking the at least three points as the centers of the cone base circles, wherein one point corresponds to one cone;
taking a point at which the at least three cones intersect, or the point at which the most cones intersect, as a target point in the parameter space;
converting the target point from the parameter space back to the image space, and determining a target circle in the image space according to the target point; and
taking the circle center and the radius of the target circle as the circle center and the radius of the target edge curve segment.
8. An electronic device, comprising:
a memory; and
a processor coupled to the memory, the processor being configured to perform the viewpoint positioning method of any one of claims 1-6 based on instructions stored in the memory.
9. A computer readable storage medium having stored thereon a program which, when executed by a processor, implements the viewpoint positioning method as claimed in any one of claims 1-6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110772995.3A CN113378790B (en) | 2021-07-08 | 2021-07-08 | Viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113378790A CN113378790A (en) | 2021-09-10 |
| CN113378790B (en) | 2024-06-11 |
Family
ID=77581356
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110772995.3A Active CN113378790B (en) | 2021-07-08 | 2021-07-08 | Viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113378790B (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114280784A (en) * | 2021-12-22 | 2022-04-05 | 歌尔光学科技有限公司 | VR head-mounted display lens adjusting device and method |
| CN114419017B (en) * | 2022-01-25 | 2025-03-28 | 安健科技(重庆)有限公司 | A method and terminal for identifying a beam limiter region in an X-ray image |
| CN115457646B (en) * | 2022-09-22 | 2025-11-07 | 中国人民解放军空军特色医学中心 | Device, method and related product for identifying lesions around fundus |
| CN117373103B (en) * | 2023-10-18 | 2024-05-07 | 北京极溯光学科技有限公司 | Image feature extraction method, device, equipment and storage medium |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002056394A (en) * | 2000-08-09 | 2002-02-20 | Matsushita Electric Ind Co Ltd | Eye position detection method and eye position detection device |
| CA2437345A1 (en) * | 2001-02-09 | 2002-08-22 | Kabushiki Kaisha Topcon | Eye characteristics measuring apparatus |
| CN105225216A (en) * | 2014-06-19 | 2016-01-06 | Iris preprocessing algorithm based on spatial-distance circle marker edge detection |
| CN106203358A (en) * | 2016-07-14 | 2016-12-07 | Iris locating method and device |
| CN107341467A (en) * | 2017-06-30 | 2017-11-10 | 广东欧珀移动通信有限公司 | Iris collection method and device, electronic device and computer-readable storage medium |
| CN107871322A (en) * | 2016-09-27 | 2018-04-03 | 北京眼神科技有限公司 | Iris image segmentation method and device |
| CN109740491A (en) * | 2018-12-27 | 2019-05-10 | 北京旷视科技有限公司 | A human eye sight recognition method, device, system and storage medium |
| CN110189350A (en) * | 2019-06-04 | 2019-08-30 | Pupil edge determination method, apparatus, and storage medium |
| CN112184744A (en) * | 2020-11-29 | 2021-01-05 | 惠州高视科技有限公司 | Display screen edge defect detection method and device |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8682073B2 (en) * | 2011-04-28 | 2014-03-25 | Sri International | Method of pupil segmentation |
| US10909363B2 (en) * | 2019-05-13 | 2021-02-02 | Fotonation Limited | Image acquisition system for off-axis eye images |
Non-Patent Citations (3)
| Title |
|---|
| Supakit Fuangkaew; Karn Patanukhom. Eye State Detection and Eye Sequence Classification for Paralyzed Patient Interaction. 2013 2nd IAPR Asian Conference on Pattern Recognition. 2013, full text. * |
| Iris localization algorithm based on the gray-level curve method and improved Hough transform; Wang Wei; Lu Ying; Liu Wei; Journal of Science of Teachers College and University; 2016-02-28 (Issue 02); full text. * |
| Wang Yuhong; Wang Jianqing. Research on human-computer interaction technology based on viewpoint tracking. Modern Information Technology. 2020, full text. * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113378790B (en) | Viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium | |
| CN108038880B (en) | Method and apparatus for processing image | |
| CN110874594B (en) | Human body appearance damage detection method and related equipment based on semantic segmentation network | |
| Ma et al. | A saliency prior context model for real-time object tracking | |
| WO2019218824A1 (en) | Method for acquiring motion track and device thereof, storage medium, and terminal | |
| CN112381104B (en) | Image recognition method, device, computer equipment and storage medium | |
| WO2021139324A1 (en) | Image recognition method and apparatus, computer-readable storage medium and electronic device | |
| US10467743B1 (en) | Image processing method, terminal and storage medium | |
| CN112750162B (en) | Target identification positioning method and device | |
| Cheng et al. | Crosswalk navigation for people with visual impairments on a wearable device | |
| WO2020078119A1 (en) | Method, device and system for simulating user wearing clothing and accessories | |
| Khan et al. | Automatic localization of pupil using eccentricity and iris using gradient based method | |
| Jung et al. | Eye detection under varying illumination using the retinex theory | |
| JP2023539483A (en) | Medical image processing methods, devices, equipment, storage media and computer programs | |
| US20190147226A1 (en) | Method, system and apparatus for matching a person in a first image to a person in a second image | |
| Singh et al. | Combination of Kullback–Leibler divergence and Manhattan distance measures to detect salient objects | |
| CN108875704B (en) | Method and apparatus for processing image | |
| Li et al. | Location and model reconstruction algorithm for overlapped and sheltered spherical fruits based on geometry | |
| US9501710B2 (en) | Systems, methods, and media for identifying object characteristics based on fixation points | |
| CN113780322B (en) | Safety detection method and device | |
| Matveev et al. | Detecting precise iris boundaries by circular shortest path method | |
| Lee et al. | Implementation of age and gender recognition system for intelligent digital signage | |
| KR102506037B1 (en) | Pointing method and pointing system using eye tracking based on stereo camera | |
| Muddamsetty et al. | Spatio-temporal saliency detection in dynamic scenes using local binary patterns | |
| CN117137427A (en) | Vision detection method and device based on VR and intelligent glasses |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| 2022-02-10 | TA01 | Transfer of patent application right | Applicant after: Tianyiyun Technology Co., Ltd., Room 205-32, Floor 2, Building 2, No. 1 and No. 3, Qinglong Hutong A, Dongcheng District, Beijing 100007. Applicant before: CHINA TELECOM Corp., Ltd., No. 31, Financial Street, Xicheng District, Beijing 100033. |
| | GR01 | Patent grant | |