
CN111369496A - Pupil center positioning method based on star ray - Google Patents

Pupil center positioning method based on star ray

Info

Publication number
CN111369496A
CN111369496A (application CN202010098080.4A)
Authority
CN
China
Prior art keywords
pupil
area
star
edge
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010098080.4A
Other languages
Chinese (zh)
Other versions
CN111369496B (en)
Inventor
韩慧妍
马启玮
韩燮
杨婷
李俊伯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN202010098080.4A priority Critical patent/CN111369496B/en
Publication of CN111369496A publication Critical patent/CN111369496A/en
Application granted granted Critical
Publication of CN111369496B publication Critical patent/CN111369496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/181: Segmentation; Edge detection involving edge growing; involving edge linking
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G06T2207/20032: Median filtering
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30041: Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and specifically relates to a pupil center localization method based on star rays. The method first preprocesses the image, then detects the pupil region with a star-ray algorithm, obtains a binarization threshold within the ROI by an iterative method, optimizes the binarized pupil region, and removes erroneous star-ray edge points. The method can accurately detect the pupil center under occlusion and satisfies real-time and robustness requirements well.

Description

A pupil center localization method based on star rays

Technical Field

The invention belongs to the technical field of image processing and specifically relates to a pupil center localization method based on star rays.

Background

Pupil localization is the first and most important step in many computer vision applications, such as face recognition, facial feature tracking, and facial expression analysis; iris detection and localization also depend on it. The accuracy of pupil center localization directly and profoundly affects subsequent processing and analysis. The first step of gaze tracking is locating the pupil center. Gaze tracking systems provide a powerful analytical tool for real-time cognitive processing and information transfer, with roughly two important application areas: diagnostic analysis and human-computer interaction. Eye-tracking systems used for diagnosis provide an objective, quantitative, and powerful way to record a reader's gaze, and the information they provide has significant application value in many fields.

Current pupil center localization methods fall mainly into feature-based methods and statistical-learning-based methods. Feature-based methods are further divided into integral-projection methods and Hough circle detection, while shape-based methods train a model on large amounts of data to locate the pupil center. The integral projection method only computes image gray values, so its computation is light, but it is easily affected by illumination, eyelashes, and eyelid occlusion, so its detection error is large. Hough circle detection is computationally expensive and struggles to meet real-time requirements. Statistical-learning methods require large amounts of training data, and their relatively complex learning models cannot meet the requirement of precisely locating the pupil center.

Summary of the Invention

In view of the above problems, the present invention provides a pupil center localization method based on star rays.

To achieve the above object, the present invention adopts the following technical solution:

A pupil center localization method based on star rays: the image is first preprocessed; the pupil region is then detected with a star-ray algorithm; a binarization threshold is obtained within the ROI by an iterative method; the binarized pupil region is optimized; and erroneous star-ray edge points are removed.

Further, the image preprocessing specifically comprises: collecting data with an infrared camera, converting the captured video into grayscale images, and applying a median filter to the grayscale image. Median filtering reduces the influence of eyelashes and is the basis for effectively extracting the binarized pupil region.

Further, detecting the pupil region for the star-ray algorithm specifically comprises: taking the center point of the pupil region detected by yolov3 as a point inside the pupil region. Yolov3 is fast, has a simple model, detects small objects well, and is a real-time object detection model, so the pupil region can be detected in real time.

Further, the binarization threshold is obtained within the ROI by an iterative method, with the following steps:

Step 1: compute the mean gray value of the pupil region, denote it a, and take a as the initial threshold;

Step 2: compute the mean of the pixels greater than threshold a, denoted b, and the mean of the pixels less than threshold a, denoted c;

Step 3: compute the new threshold a = (b + c)/2;

Step 4: if a(k) = a(k+1), the result is the threshold; otherwise return to Step 2 and iterate. The gray values of the pupil region are lower than those of other regions, so the iterative method adaptively yields the binarization threshold of the pupil region, by which the pupil can be extracted effectively.

Further, optimizing the binarized pupil region specifically comprises: applying a median filter to the binarized pupil region to remove edge roughness and make the edge as smooth as possible; filling the reflective regions, using a scan-line algorithm to fill the holes inside the pupil region effectively from top to bottom and left to right; then, with the center point of the pupil region as the origin, emitting a ray every 20° to obtain 18 pupil edge points. Median filtering effectively smooths the edges of the binarized image, so the star-ray model can extract a fairly accurate pupil edge; the scan-line algorithm effectively fills the holes inside the pupil region, preventing extracted edge points from lying inside the pupil. The edge points extracted at this step are the coarsely extracted pupil edge points.

Further, the erroneous star-ray edge points to be removed are: eyelid-occlusion edge points, edge points of reflective regions, and outliers of the binarized image.

Still further, the eyelid-occlusion edge points are upper-eyelid occlusion points, which are removed as follows: compute the slopes h1, h2, h3, ... between the initial pupil edge points; compute the slopes of the edge points above the pupil center; and remove the pupil edge points at the occlusion. The slope at pupil edge points occluded by the eyelid is low, approaching 0, while the slope at unoccluded pupil edge points is higher, clearly distinguishing them from the occluded ones.

Still further, the outliers of the binarized image are removed as follows: the pupil is approximately an ellipse, so reasonable pupil edge points should follow an elliptical distribution, and the ray emitted from the ellipse center should be perpendicular to the tangential direction at the corresponding pupil edge point; points whose tangent is not perpendicular to the ray from the pupil center, i.e. points with large error, are removed. This step effectively removes eyelid-occlusion edge points and binarized-image outliers with little computation and high speed.

Still further, the edge points of the reflective region are removed as follows: an ellipse is first fitted to the pupil edge points from which the error points have been removed; because points in the reflective region lie closer to the fitted pupil center, the point nearest the pupil center is removed, thereby eliminating the reflective-region edge points. This step effectively removes the edge points of the reflective region, so the fitted ellipse is closer to the true pupil and the pupil center point can be located effectively.

Compared with the prior art, the present invention has the following advantages:

A coarse-to-fine strategy is adopted: pupil edge points with large errors are removed step by step, refining the pupil center point. The invention can accurately locate the pupil center point even under occlusion.

Yolov3 is fast, has a simple model, detects small objects well, and runs as a real-time object detection model; the iterative method adaptively obtains the binarization threshold of the pupil region, allowing effective pupil extraction; median filtering effectively smooths the edges of the binarized image, so the star-ray model can extract a fairly accurate pupil edge; and the scan-line algorithm effectively fills the holes inside the pupil region, preventing extracted pupil edge points from lying inside the pupil.

The invention is computationally light and fast, satisfies real-time requirements well, locates the pupil center with high accuracy and robustness, and can be applied to different individuals.

Brief Description of the Drawings

Figure 1 is the flow chart of the present invention;

Figure 2 shows pupil region detection with yolov3;

Figure 3 shows the binarized image obtained by the iterative method;

Figure 4 shows the optimized binarized pupil region;

Figure 5 shows the effective filling of holes in the pupil region;

Figure 6 shows the star-ray edge point errors;

Figure 7 shows the image with occluded pupil edge points removed;

Figure 8 shows the removal of binarized-image outliers;

Figure 9 shows the results of the first and second fittings;

Figure 10 shows the accurate detection of the pupil center point by the present invention.

Detailed Description

Embodiment 1

The present invention is a pupil center localization method based on star rays; a 1×7 median filter template is adopted.

1. Image preprocessing
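As a concrete illustration, the 1×7 median filter template adopted above can be sketched as a sliding-window median over each image row; the border handling (shrinking the window near the edges) is an assumption of this sketch, not something prescribed by the patent.

```python
def median_filter_1x7(row):
    """Apply a 1x7 horizontal median filter to one image row (list of gray values)."""
    k = 3  # half window width for a 7-pixel window
    out = []
    n = len(row)
    for i in range(n):
        lo = max(0, i - k)
        hi = min(n, i + k + 1)
        window = sorted(row[lo:hi])  # window shrinks at the borders
        out.append(window[len(window) // 2])
    return out

# gray-value spikes mimic eyelash noise on an otherwise dark row
row = [10, 200, 12, 11, 13, 9, 250, 10]
print(median_filter_1x7(row))
```

The bright spikes (200, 250) are suppressed while the dark background is preserved, which is why the preprocessing step reduces the influence of eyelashes before binarization.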

2. Pupil region detection

The pupil region is detected with yolov3; the result is shown in Figure 2. The center point of the pupil region detected by yolov3 is taken as a point inside the pupil region; with this center point as the origin, a ray is emitted every 20°, yielding 18 pupil edge points. The pupil region can be detected accurately and efficiently by yolov3.
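The ray-casting step described above can be sketched as follows; the toy binary image, the one-pixel marching step, and the rounding scheme are illustrative assumptions, not details given by the patent.

```python
import math

def star_ray_edges(img, cx, cy, step_deg=20):
    """From a seed point inside a binary pupil mask (1 = pupil, 0 = background),
    march outward along rays every step_deg degrees and record the first
    background (or out-of-bounds) pixel on each ray as an edge point."""
    h, w = len(img), len(img[0])
    edges = []
    for deg in range(0, 360, step_deg):
        dx, dy = math.cos(math.radians(deg)), math.sin(math.radians(deg))
        r = 0.0
        while True:
            x, y = int(round(cx + r * dx)), int(round(cy + r * dy))
            if not (0 <= x < w and 0 <= y < h) or img[y][x] == 0:
                edges.append((x, y))
                break
            r += 1.0  # march one pixel further along the ray
    return edges

# 11x11 toy mask with a filled disc of radius 4 centred at (5, 5)
img = [[1 if (x - 5) ** 2 + (y - 5) ** 2 <= 16 else 0 for x in range(11)]
       for y in range(11)]
print(len(star_ray_edges(img, 5, 5)))
```

With a 20° step the loop emits 360/20 = 18 rays, matching the 18 pupil edge points described in the method.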

3. Pupil region binarization

The binarization threshold is obtained within the ROI by an iterative method, with the following steps:

1) Compute the mean gray value of the region, denote it a, and take a as the initial threshold;

2) Compute the mean of the pixels greater than threshold a, denoted b, and the mean of the pixels less than threshold a, denoted c;

3) Compute the new threshold a = (b + c)/2;

4) If a(k) = a(k+1), the result is the threshold; otherwise return to step 2 and iterate.
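Steps 1-4 can be sketched in a few lines of Python; the convergence tolerance `eps` replaces the exact equality test of step 4, which rarely holds for floating-point means, and the sample gray values are toy data standing in for the ROI pixels.

```python
def iterative_threshold(pixels, eps=0.5):
    """Iterative threshold selection: start from the mean, then repeatedly
    average the means of the two classes until the threshold stabilises."""
    a = sum(pixels) / len(pixels)          # step 1: initial threshold = mean
    while True:
        above = [p for p in pixels if p > a]
        below = [p for p in pixels if p <= a]
        b = sum(above) / len(above)        # step 2: mean of pixels above a
        c = sum(below) / len(below)        #         mean of pixels below a
        new_a = (b + c) / 2                # step 3: updated threshold
        if abs(new_a - a) < eps:           # step 4: converged
            return new_a
        a = new_a

# dark pupil pixels (~30) against brighter iris/sclera pixels (~180)
sample = [28, 30, 32, 29, 31, 178, 182, 180, 179, 181]
t = iterative_threshold(sample)
print(t)
```

Because the pupil is markedly darker than its surroundings, the two class means separate cleanly and the threshold settles between them, which is what makes the method adaptive across individuals and lighting.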

The binarized image obtained by the iterative method is shown in Figure 3.

3.1 Optimizing the binarized pupil region

Median filtering is applied to the binarized pupil region to remove edge roughness and make the edge as smooth as possible. The comparison is shown in Figure 4, from which it can be seen that the pupil edge becomes smooth after median filtering, reducing the error of the subsequent pupil-edge-point extraction. The reflective regions are then filled: a scan-line algorithm (top to bottom, left to right) effectively fills the holes inside the pupil region. The result is shown in Figure 5, where the interior of the pupil has been effectively filled.
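A minimal per-row version of the scan-line filling can be sketched as follows; the full algorithm scans top to bottom and left to right, while this sketch only closes the gaps between the leftmost and rightmost foreground pixels of each row, which is an assumption sufficient to illustrate how glint holes inside the pupil get closed.

```python
def fill_holes_by_scanline(img):
    """For each row of a binary mask (1 = pupil), set every pixel between the
    leftmost and rightmost foreground pixels to 1, closing interior holes."""
    filled = [row[:] for row in img]
    for row in filled:
        ones = [i for i, v in enumerate(row) if v == 1]
        if len(ones) >= 2:
            for i in range(ones[0], ones[-1] + 1):
                row[i] = 1
    return filled

img = [
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],   # the zero in the middle mimics a glint hole
    [0, 1, 1, 1, 0],
]
print(fill_holes_by_scanline(img))
```

Closing these holes matters because a ray cast from the pupil center would otherwise stop at the glint boundary and report an edge point inside the pupil.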

4. Star-ray edge point errors

Points at eyelid-occluded edges; edge points of reflective regions; outliers obtained from the binarized image.

These are shown in Figure 6, from which it can be seen that these three kinds of pupil edge points must be removed to improve the accuracy of the pupil center point.

4.1 Removing upper-eyelid occlusion edge points

The slopes h1, h2, h3, ... between the initial pupil edge points are computed. As Figure 6 shows, the occlusion comes mainly from the upper eyelid, so to reduce computation only the slopes of the edge points above the pupil center are evaluated. The slope at pupil edge points occluded by the eyelid is low, approaching 0, while the slope at unoccluded pupil edge points is higher, clearly distinguishing them from the occluded ones. The result of removing the occluded pupil edge points is shown in Figure 7, where the upper-eyelid occlusion points are removed well.
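The slope test can be sketched as below. The slope threshold of 0.2 and the toy points are assumptions for illustration, and image coordinates are assumed to grow downward, so "above the center" means y < center_y.

```python
def cull_eyelid_points(points, center_y, slope_thresh=0.2):
    """Remove edge points above the pupil center whose segment to the next
    edge point is nearly horizontal (low slope), i.e. eyelid-boundary points."""
    keep = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        if y >= center_y:               # only points above the center are tested
            keep.append((x, y))
            continue
        x2, y2 = points[(i + 1) % n]    # slope to the next edge point
        dx = x2 - x
        slope = abs((y2 - y) / dx) if dx else float("inf")
        if slope > slope_thresh:        # steep enough: genuine pupil edge
            keep.append((x, y))
    return keep

# two nearly horizontal points (eyelid), one steep point, one below-center point
pts = [(0, 10), (2, 10.2), (4, 10.1), (3, 30)]
print(cull_eyelid_points(pts, center_y=20))
```

The flat segments between (0, 10), (2, 10.2), and (4, 10.1) identify them as lying on the eyelid boundary, while points below the center are kept untouched.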

4.2 Removing binarized-image outliers

The pupil is approximately an ellipse, so reasonable pupil edge points should follow an elliptical distribution, and the ray emitted from the ellipse center should be perpendicular to the tangential direction at the corresponding pupil edge point. Points whose tangent is not perpendicular to the ray from the pupil center, i.e. points with large error, are removed. The result is shown in Figure 8, where this kind of error point is removed well.
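The perpendicularity test can be sketched as follows. Strictly, the radius is perpendicular to the tangent only for a circle; for a near-circular pupil ellipse the test is the heuristic the description relies on. The tangent here is estimated from the two neighbouring edge points, and the cosine threshold of 0.4 is an assumption of this sketch.

```python
import math

def cull_outliers(points, cx, cy, cos_thresh=0.4):
    """Keep edge points whose ray from the center (cx, cy) is roughly
    perpendicular to the local tangent estimated from the two neighbours."""
    kept = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        (xp, yp), (xn, yn) = points[i - 1], points[(i + 1) % n]
        tx, ty = xn - xp, yn - yp          # local tangent direction
        rx, ry = x - cx, y - cy            # ray direction from the center
        denom = math.hypot(tx, ty) * math.hypot(rx, ry)
        if denom == 0:
            continue
        cos_angle = abs(tx * rx + ty * ry) / denom
        if cos_angle < cos_thresh:         # near 90 degrees: consistent point
            kept.append((x, y))
    return kept

# 12 points on a circle of radius 10, then one displaced outlier
circle = [(10 * math.cos(math.radians(a)), 10 * math.sin(math.radians(a)))
          for a in range(0, 360, 30)]
noisy = list(circle)
noisy[0] = (15, 8)
print(len(cull_outliers(noisy, 0, 0)))
```

On the clean circle every point passes (the cosine is numerically zero), while the displaced point fails the test; points adjacent to a bad point may also be discarded since their tangent estimate is corrupted, which is acceptable for a conservative culling step.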

4.3 Removing reflective-region edge points

As Figure 6 shows, the edge points of the reflective region cannot be removed by the slope test. An ellipse is first fitted to the pupil edge points from which the error points have been removed; because points in the reflective region lie closer to the fitted pupil center, the point nearest the pupil center is removed, thereby eliminating them.
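A simplified version of this step can be sketched as follows; for brevity the fitted ellipse center is approximated by the centroid of the edge points, whereas the patent performs a full ellipse fit, and culling a single nearest point is likewise an assumption of this sketch.

```python
import math

def cull_reflection_points(points, n_cull=1):
    """Drop the n_cull edge points closest to the (approximate) pupil center.
    The centroid stands in for the fitted ellipse center of the patent."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    ranked = sorted(points, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    return ranked[n_cull:]

# edge points roughly on a circle of radius 10, plus one glint point at radius ~3
pts = [(10, 0), (0, 10), (-10, 0), (0, -10), (7, 7), (-7, 7), (-7, -7), (7, -7),
       (2, 2)]
print(cull_reflection_points(pts))
```

The glint point (2, 2) sits well inside the ring of true edge points, so it ranks first by distance to the center and is removed before the second ellipse fit.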

The results of the first and second fittings are shown in Figure 9, from which it can be seen that the localization accuracy of the pupil center point has improved: before removal the pupil center was [471.203, 148.868]; after removal it was [470.726, 147.615].

After the above three removal steps, the invention can accurately detect the pupil center point under occlusion, as shown in Figure 10. The method is efficient and accurate and, in testing, satisfies the real-time and robustness requirements well.

Matters not described in detail in this specification belong to the prior art known to those skilled in the art. Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand it, it should be clear that the invention is not limited to the scope of these specific embodiments. To one of ordinary skill in the art, various changes are obvious as long as they fall within the spirit and scope of the invention as defined and determined by the appended claims, and all inventions and creations that make use of the inventive concept are protected.

Claims (9)

1. A pupil center localization method based on star rays, characterized in that: the image is first preprocessed; the pupil region is then detected with a star-ray algorithm; a binarization threshold is obtained within the ROI by an iterative method; the binarized pupil region is optimized; and erroneous star-ray edge points are removed.

2. The star-ray-based pupil center localization method according to claim 1, characterized in that the image preprocessing specifically comprises: collecting data with an infrared camera, converting the captured video into grayscale images, and applying a median filter to the grayscale images.

3. The star-ray-based pupil center localization method according to claim 1, characterized in that detecting the pupil region for the star-ray algorithm specifically comprises: taking the center point of the pupil region detected by yolov3 as a point inside the pupil region; and, with the center point of the pupil region as the origin, emitting a ray every 20° to obtain 18 pupil edge points.

4. The star-ray-based pupil center localization method according to claim 1, characterized in that the binarization threshold is obtained within the ROI by an iterative method, with the following steps: Step 1, compute the mean gray value of the pupil region, denote it a, and take a as the initial threshold; Step 2, compute the mean of the pixels greater than threshold a, denoted b, and the mean of the pixels less than threshold a, denoted c; Step 3, compute the new threshold a = (b + c)/2; Step 4, if a(k) = a(k+1), the result is the threshold; otherwise return to Step 2 and iterate.

5. The star-ray-based pupil center localization method according to claim 1, characterized in that optimizing the binarized pupil region specifically comprises: applying a median filter to the binarized pupil region to remove edge roughness and make the edge as smooth as possible; and filling the reflective regions, using a scan-line algorithm to fill the holes inside the pupil region effectively from top to bottom and left to right.

6. The star-ray-based pupil center localization method according to claim 1, characterized in that the erroneous star-ray edge points to be removed are: eyelid-occlusion edge points, edge points of reflective regions, and outliers of the binarized image.

7. The star-ray-based pupil center localization method according to claim 6, characterized in that the eyelid-occlusion edge points are upper-eyelid occlusion points, removed by: computing the slopes h1, h2, h3, ... between the initial pupil edge points; computing the slopes of the edge points above the pupil center; and removing the pupil edge points at the occlusion.

8. The star-ray-based pupil center localization method according to claim 6, characterized in that the outliers of the binarized image are removed by: treating the pupil as approximately an ellipse, so that reasonable pupil edge points should follow an elliptical distribution and the ray emitted from the ellipse center should be perpendicular to the tangential direction at the corresponding pupil edge point; and removing the points whose tangent is not perpendicular to the ray from the pupil center, i.e. the points with large error.

9. The star-ray-based pupil center localization method according to claim 6, characterized in that the edge points of the reflective region are removed by: first fitting an ellipse to the pupil edge points from which the error points have been removed; and, because the reflective-region points lie closer to the fitted pupil center, removing the point nearest the pupil center, thereby eliminating the reflective-region edge points.
CN202010098080.4A 2020-02-18 2020-02-18 Pupil center positioning method based on star ray Active CN111369496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098080.4A CN111369496B (en) 2020-02-18 2020-02-18 Pupil center positioning method based on star ray

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010098080.4A CN111369496B (en) 2020-02-18 2020-02-18 Pupil center positioning method based on star ray

Publications (2)

Publication Number Publication Date
CN111369496A true CN111369496A (en) 2020-07-03
CN111369496B CN111369496B (en) 2022-07-01

Family

ID=71210703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010098080.4A Active CN111369496B (en) 2020-02-18 2020-02-18 Pupil center positioning method based on star ray

Country Status (1)

Country Link
CN (1) CN111369496B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850130A (en) * 2021-08-23 2021-12-28 宁波荣新安圣机械有限公司 Method and system for locating pupil center of a refractometer
CN114283176A (en) * 2021-12-31 2022-04-05 广东工业大学 Pupil trajectory generation method based on human eye video

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8520310B2 (en) * 2008-09-26 2013-08-27 Konica Minolta Opto, Inc. Image display device, head-mounted display and head-up display
US8600124B2 (en) * 2004-09-16 2013-12-03 Imatx, Inc. System and method of predicting future fractures
US20170103504A1 (en) * 2015-10-09 2017-04-13 Universidad Nacional Autónoma de México System for the identification and quantification of helminth eggs in environmental samples
CN108509873A (en) * 2018-03-16 2018-09-07 新智认知数据服务有限公司 Pupil image edge point extracting method and device
CN109829403A (en) * 2019-01-22 2019-05-31 淮阴工学院 A kind of vehicle collision avoidance method for early warning and system based on deep learning
CN109993749A (en) * 2017-12-29 2019-07-09 北京京东尚科信息技术有限公司 The method and apparatus for extracting target image
CN110348408A (en) * 2019-07-16 2019-10-18 上海博康易联感知信息技术有限公司 Pupil positioning method and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CONSTANTIN BARABAŞA: "Servo Control Based on Pupil Detection Eye Tracking", 2018 IEEE 24th International Symposium for Design and Technology in Electronic Packaging *
WANG Junning et al.: "Pupil center localization algorithm for an infrared head-mounted eye tracker", Journal of Xidian University (Natural Science Edition) *
HAN Huiyan et al.: "Skeleton-based segmentation of 3D point cloud models", Computer Engineering and Design *
GAO Yuan: "Research on pupil localization based on fusing the star-ray method and ellipse fitting", Electronics World *


Also Published As

Publication number Publication date
CN111369496B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
Mukherjee et al. Level set analysis for leukocyte detection and tracking
CN114782499B (en) A method and device for extracting static areas of images based on optical flow and view geometry constraints
CN108133476B (en) Method and system for automatically detecting pulmonary nodules
CN101847265A (en) Method for extracting moving objects and partitioning multiple objects used in bus passenger flow statistical system
CN108960076B (en) Ear recognition and tracking method based on convolutional neural network
CN116030396B (en) An Accurate Segmentation Method for Video Structured Extraction
CN111695373A (en) Zebra crossing positioning method, system, medium and device
CN111369496B (en) Pupil center positioning method based on star ray
CN111476804A (en) Method, device and equipment for efficiently segmenting carrier roller image and storage medium
CN114332278A (en) OCTA image motion correction method based on deep learning
CN115690551A (en) A dual-light image matching and fusion method and system
Panicker et al. CNN based image descriptor for polycystic ovarian morphology from transvaginal ultrasound
CN108564020B (en) Micro-gesture recognition method based on panoramic 3D images
CN113191421A (en) Gesture recognition system and method based on Faster-RCNN
CN110246125A (en) Teat placement automatic testing method based on ABUS coronal image
CN113780040B (en) Positioning method and device for lip key points, storage medium and electronic equipment
CN114613006A (en) A kind of long-distance gesture recognition method and device
CN110766698B (en) A method for tracking and identifying oscillating apples in dynamic background
CN117351307B (en) Model training method, device, equipment and storage medium
CN113610810B (en) Vascular detection method based on Markov random field
CN111354003A (en) Pig segmentation method based on depth image
Mahadeo et al. Model-based pupil and iris localization
Princye et al. Detection of exudates and feature extraction of retinal images using fuzzy clustering method
CN116664501A (en) A method of judging the change of stored grain based on image processing
Zhao et al. A survey of semen quality evaluation in microscopic videos using computer assisted sperm analysis

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant