
CN106218409A - Naked-eye 3D automobile instrument display method and device with human eye tracking - Google Patents

Naked-eye 3D automobile instrument display method and device with human eye tracking Download PDF

Info

Publication number
CN106218409A
CN106218409A (application CN201610575172.0A / CN201610575172A)
Authority
CN
China
Prior art keywords
human eye
image
face
area
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610575172.0A
Other languages
Chinese (zh)
Inventor
韩毅
肖旭辉
刘伟
魏敬东
邢亚山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University filed Critical Changan University
Priority to CN201610575172.0A priority Critical patent/CN106218409A/en
Publication of CN106218409A publication Critical patent/CN106218409A/en
Pending legal-status Critical Current


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20 Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/21 Output arrangements using visual output, e.g. blinking lights or matrix displays
    • B60K35/211 Output arrangements using visual output producing three-dimensional [3D] effects, e.g. stereoscopic images
    • B60K35/213 Virtual instruments
    • B60K35/80 Arrangements for controlling instruments
    • B60K35/81 Arrangements for controlling instruments for controlling displays

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a naked-eye 3D automobile instrument display method and device with human eye tracking. The method includes: calibrating a binocular camera; acquiring face images in real time; extracting the human eyes from the images; computing the spatial position of the eyes relative to the instrument display screen; and adjusting the distance between the display panel and the lenticular lens according to the viewing zone in which the eyes are located, thereby changing the outgoing light to achieve an ideal naked-eye 3D display. The method automatically and accurately detects the human eyes and gives their spatial position, so that the naked-eye 3D automobile instrument display device can be adjusted according to that position, keeping the current eye position within the optimal viewing zone.

Description

Naked-eye 3D automobile instrument display method and device with human eye tracking

Technical Field

The invention relates to 3D automobile instrument display technology, and in particular to a naked-eye 3D automobile instrument display method and device with human eye tracking.

Background

Compared with a traditional two-dimensional instrument, a 3D instrument display matches human visual characteristics more closely, giving the viewer a stronger sense of depth and immersion and presenting road and vehicle status information more intuitively and in real time. Naked-eye 3D technology lets the driver observe a three-dimensional display of vehicle data directly, without glasses, and therefore does not compromise driving safety. There are currently three main naked-eye 3D technologies: parallax barrier, lenticular lens, and directional backlight. Both the parallax barrier and the directional backlight suffer from the fatal drawback of relatively low screen brightness, which greatly degrades the viewing experience, so the lenticular lens approach is currently the most suitable; its chief advantage is that screen brightness does not drop when switching to 3D. Lenticular 3D works by placing a layer of cylindrical lenses in front of an LCD so that the image plane of the LCD lies on the focal plane of the lenses. Each pixel of the on-screen image is divided into several sub-pixels, so the sub-pixels under a lens are projected in different directions; when the lenticular sheet is set at an angle to the LCD pixel columns, each group of sub-pixels is projected repeatedly into the viewing zones, so a 3D image can be seen from several distinct zones. The drawbacks of the lenticular approach are that the 3D image is visible only within these designated zones, and that the increased sub-pixel count severely reduces image resolution, making a multi-viewpoint 3D effect impractical. In other words, once a naked-eye 3D display is manufactured, its optimal viewing distance and angle are fixed, and the user cannot change viewing position freely, which greatly harms the viewing experience.

Summary of the Invention

In order to solve the above problems, the present invention provides the following technical solutions.

A naked-eye 3D automobile instrument display method with human eye tracking, comprising the following steps:

Step 1, install a binocular camera at the front of the vehicle cockpit and calibrate its position; use the binocular camera to capture left and right images at the same instant, and rectify the two images to obtain corrected images.

Step 2, extract the face region from the rectified images.

Step 3, extract the eye region within the face region, optimize the extracted eye region, and detect the iris in the optimized eye region.

Step 4, compute the spatial position of the iris relative to the camera, and from the positional relationship between the camera and the instrument display screen compute the spatial position of the eyes relative to the screen.

Further, step 2 comprises the following sub-steps:

Step 21, binarize the rectified image to obtain a binary image.

Step 22, taking the horizontal direction of the binary image as the X axis and the perpendicular direction as the Y axis, determine the start and end points of the face region along the X axis.

Step 23, determine the start and end points of the face region along the Y axis.

Step 24, from the X-axis start and end points and the Y-axis start and end points, obtain the rectangle framing the face.

Further, the specific steps for determining the start and end points of the face region along the X axis in step 22 are:

Step 221, set X = b, b = 0.

Step 222, for column X = b, count the white points in the binary image, denoted SumTempx, and the total number of points, Sum_p_c. Set a threshold pLThresh1; if the ratio SumTempx / Sum_p_c is greater than or equal to pLThresh1, take X = b as the start of the face region along the X axis, i.e. the X coordinate of the left face boundary, denoted x_L.

If the ratio is less than pLThresh1, add the step size x_p to b and repeat step 222 until the left boundary x_L is found.

Step 223, set X = b, with b initialized at the right edge of the image.

Step 224, for column X = b, count the white points SumTempx and the total points Sum_p_c as above; if SumTempx / Sum_p_c is greater than or equal to pLThresh1, take X = b as the end of the face region along the X axis, i.e. the X coordinate of the right face boundary, denoted x_R.

If the ratio is less than pLThresh1, add the step size -x_p to b and repeat step 224 until the right boundary x_R is found.
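The column scan of steps 221-224 can be sketched as follows. The threshold value and scan step below are illustrative placeholders (the patent leaves pLThresh1 and x_p unspecified), and a NumPy binary image with 1 for white points is assumed.

```python
import numpy as np

def face_x_bounds(binary, thresh=0.35, step=1):
    """Scan columns of a binary image (1 = white) for the face's left and
    right X boundaries, as in steps 221-224.  `thresh` stands in for
    pLThresh1 and `step` for x_p; both are assumed values."""
    h, w = binary.shape
    x_l = x_r = None
    # left boundary: first column (scanning left to right) whose
    # white-point ratio SumTempx / Sum_p_c reaches the threshold
    for x in range(0, w, step):
        if binary[:, x].sum() / h >= thresh:
            x_l = x
            break
    # right boundary: same test, scanning from the right edge with step -x_p
    for x in range(w - 1, -1, -step):
        if binary[:, x].sum() / h >= thresh:
            x_r = x
            break
    return x_l, x_r
```

On a synthetic binary image whose face columns span 20-60, the scan returns (20, 60).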

Further, the specific steps for determining the start and end points of the face region along the Y axis in step 23 are:

Step 231, set Y = c, c = 0.

Step 232, for row Y = c, count the white points in the binary image, denoted SumTempy, and the total number of points, Sum_p_r. Set a threshold pLThresh2; if the ratio SumTempy / Sum_p_r is greater than or equal to pLThresh2, take Y = c as the start of the face region along the Y axis, i.e. the Y coordinate of the upper face boundary, denoted y_U.

If the ratio is less than pLThresh2, add the step size y_p to c and repeat step 232 until the upper boundary y_U is found.

Step 233, with x_R, x_L and y_U known, the lower face boundary is obtained from:

y_D = y_U - 1.36 × (x_R - x_L)

Further, in step 3 the eye region is extracted from the face region by computing the horizontal integral projection over the columns [x_L, x_R]:

M_h(y) = Σ_{x = x_L}^{x_R} G(x, y)

where G(x, y) is the gray value of the binary image at coordinates (x, y), and M_h(y) is the horizontal integral projection curve over the region [x_L, x_R].

Find the trough of the projection curve corresponding to the eyes, and from the two peaks adjacent to that trough obtain their Y-axis coordinates k_1 and k_2.

Let y_1 = k_2 - 3/5 (k_2 - k_1) and y_2 = k_2 + 3/5 (k_2 - k_1); the band between y_1 and y_2 is the eye region.
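The projection-based band extraction can be sketched as below. The trough/peak search is deliberately simplified (deepest trough plus one peak on each side); a full implementation would use the more careful four-trough analysis of the projection curve described in the detailed description.

```python
import numpy as np

def eye_band(binary, x_l, x_r):
    """Locate the eye band via horizontal integral projection.
    Simplified sketch: peak picking here is an assumption, not the
    patent's exact procedure."""
    # Horizontal integral projection M_h(y) over the face columns [x_l, x_r]
    proj = binary[:, x_l:x_r + 1].sum(axis=1)
    # Deepest trough taken as the eye row, with one neighbouring
    # peak located on each side (k_1, k_2)
    trough = int(np.argmin(proj))
    k1 = int(np.argmax(proj[:trough])) if trough > 0 else 0
    k2 = trough + int(np.argmax(proj[trough:]))
    # Patent formulas: y_1 = k_2 - 3/5 (k_2 - k_1), y_2 = k_2 + 3/5 (k_2 - k_1)
    y1 = k2 - 3.0 / 5.0 * (k2 - k1)
    y2 = k2 + 3.0 / 5.0 * (k2 - k1)
    return y1, y2
```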

Further, optimizing the extracted eye region in step 3 comprises:

Step 31, smooth the eye region with a Gaussian filter to obtain a smoothed image.

Step 32, compute the gradient magnitude and direction of each pixel in the smoothed image, then apply non-maximum suppression, as follows:

Take each pixel of the smoothed image in turn as the current pixel. If its magnitude is greater than the magnitudes of the two neighbouring pixels along its gradient direction, it is a local maximum; otherwise set its gray value to 0. After removing all pixels whose gray value is 0, the remaining pixels form the non-maximum-suppressed image.

Step 33, set two thresholds L and H, with L = H/2. Take each pixel of the non-maximum-suppressed image in turn as the current pixel. If its magnitude is greater than or equal to L, it is a low-threshold local maximum point, otherwise set its gray value to 0; if its magnitude is greater than or equal to H, it is also a high-threshold local maximum point.

Step 34, all low-threshold local maximum points form the low-threshold edge image; all high-threshold local maximum points form the high-threshold edge image.

Step 35, if an edge in the high-threshold image is broken, look up the break-point coordinates in the low-threshold edge image, and among the eight neighbours of that pixel find one that can bridge the break; connect that pixel to the break point in the high-threshold edge image.

Step 36, repeat step 35 until the edges of the high-threshold image are closed; the resulting high-threshold edge image is the optimized eye region.
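A minimal sketch of the double-threshold edge linking of steps 33-36, assuming the Gaussian smoothing and non-maximum suppression of steps 31-32 have already produced the gradient-magnitude map. Strong (>= H) pixels seed the edge; weak (>= L = H/2) pixels are admitted only when 8-connected to the edge, mirroring the iterative bridging of step 35.

```python
import numpy as np

def hysteresis(mag, high):
    """Link edges on a gradient-magnitude map `mag` with thresholds
    H = `high` and L = H/2, per steps 33-36."""
    low = high / 2.0
    strong = mag >= high          # high-threshold local maxima
    weak = (mag >= low) & ~strong # low-threshold-only points
    edges = strong.copy()
    changed = True
    while changed:                # repeat until no break can be bridged
        changed = False
        ys, xs = np.nonzero(weak & ~edges)
        for y, x in zip(ys, xs):
            # 8-neighbourhood test against the current edge map
            y0, y1 = max(y - 1, 0), min(y + 2, mag.shape[0])
            x0, x1 = max(x - 1, 0), min(x + 2, mag.shape[1])
            if edges[y0:y1, x0:x1].any():
                edges[y, x] = True
                changed = True
    return edges
```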

Further, detecting the iris in the optimized eye region in step 3 comprises:

Step 37, using the extreme points of the eye-region edge in the four directions (up, down, left, right) obtained in step 36, estimate the centre and radius of the eye region by the minimum bounding rectangle method, giving the parametric equation of the eye region.

Step 38, apply a Hough transform to this parametric equation over the radius range, yielding a transform space containing a number of circles of radius R.

Step 39, choose any circle in the transform space as the current circle, traverse all circles in the space, count the circles whose centre coordinates coincide with the current circle's (the coincident-centre count), and mark the current circle.

Step 310, repeat step 39 until every circle in the transform space has been marked as the current circle.

Step 311, find the current circle with the largest coincident-centre count; its centre coordinates are the iris coordinates.
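The centre voting of steps 38-311 can be sketched as a fixed-radius Hough transform: each edge point votes for every candidate centre at distance R, and the cell with the most coincident votes is taken as the iris centre. The 6-degree angular sampling and the accumulator layout below are implementation choices, not specified by the patent.

```python
import numpy as np

def hough_circle_center(edge_points, r, shape):
    """Vote for circle centres at fixed radius `r`; `edge_points` is an
    iterable of (y, x) pixels, `shape` the (rows, cols) of the image."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.deg2rad(np.arange(0, 360, 6))
    for y, x in edge_points:
        a = np.round(x - r * np.cos(thetas)).astype(int)
        b = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[1]) & (b >= 0) & (b < shape[0])
        # one vote per distinct candidate centre per edge point
        votes = np.unique(np.stack([b[ok], a[ok]], axis=1), axis=0)
        acc[votes[:, 0], votes[:, 1]] += 1
    cy, cx = np.unravel_index(np.argmax(acc), shape)
    return cy, cx
```

Fed the edge pixels of a radius-10 circle centred at (30, 40), the accumulator peaks within a pixel of that centre.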

The invention also provides a naked-eye 3D automobile instrument display device comprising a display panel and a lenticular lens, characterized in that it further comprises a microprocessor, and a hydraulic adjustment device is arranged between the display panel and the lenticular lens and connected to the microprocessor.

Further, four hydraulic adjustment devices are provided, arranged in a dual-circuit diagonal layout at the four corners of the underside of the display panel and connected to the lenticular lens.

Further, the display panel and the lenticular lens are connected by an annular elastic connector.

Compared with the prior art, the invention has the following technical effects:

1. The invention automatically and accurately detects the human eyes and gives their spatial position, so that the naked-eye 3D automobile instrument display device can be adjusted according to that position, keeping the current eye position within the optimal viewing zone.

2. The invention offers fast processing and high recognition accuracy.

Brief Description of the Drawings

Fig. 1 is a flowchart of the naked-eye 3D automobile instrument display method and system with human eye tracking of the invention;

Fig. 2 is a schematic diagram of determining the eye position;

Fig. 3 is a one-dimensional transformation diagram of the processed image;

Fig. 4 is a binarized image;

Fig. 5 shows the extracted face region;

Fig. 6 is the horizontal integral projection of the face;

Fig. 7 shows the extracted eye region;

Fig. 8 shows the eye recognition result;

Fig. 9 is a schematic diagram of a lenticular naked-eye 3D display device;

Fig. 10 is a structural diagram of the invention;

Fig. 11 is a schematic diagram of the hydraulic device pipework;

Reference numerals: 1 - display panel, 2 - elastic connector, 3 - hydraulic adjustment device, 4 - lenticular lens.

Detailed Description

The invention is further described below with reference to the drawings and embodiments.

A naked-eye 3D automobile instrument display method with human eye tracking, which uses a binocular camera installed at the front of the vehicle cockpit to capture images, comprising the following steps:

Step 1, install the binocular camera and calibrate its position; capture left and right images at the same instant and rectify both images so that they can be matched.

The binocular positioning principle is shown in Fig. 2; the specific implementation steps are:

Step 11, measure the relative position of the two cameras by calibration, i.e. the three-dimensional translation t and rotation R of the right camera relative to the left. With the extrinsic parameters of cameras C1 and C2 relative to the world coordinate system being rotation matrices R_1, R_2 and translation vectors t_1, t_2, the camera poses are:

z_c1 = R_1 z_w + t_1,  z_c2 = R_2 z_w + t_2

which gives the positional relationship between the two cameras:

z_c1 = R_1 R_2^-1 z_c2 + t_1 - R_1 R_2^-1 t_2.

The geometric relationship between the two cameras can therefore be expressed by R and t:

R = R_1 R_2^-1,  t = t_1 - R_1 R_2^-1 t_2
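The pose composition of step 11 is a direct matrix computation; a sketch with NumPy, using the same symbols as the equations above:

```python
import numpy as np

def stereo_relative_pose(R1, t1, R2, t2):
    """Relative pose of camera 2 in camera 1's frame, per step 11:
    R = R1 R2^-1, t = t1 - R1 R2^-1 t2, so that z_c1 = R z_c2 + t."""
    R = R1 @ np.linalg.inv(R2)   # for a rotation matrix, inv(R2) == R2.T
    t = t1 - R @ t2
    return R, t
```

A quick consistency check: for any world point z_w, projecting through both camera poses and then mapping z_c2 through (R, t) must reproduce z_c1.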

Step 12, compute the disparity a target point forms between the left and right views. This first requires matching the pixel corresponding to the point in both views; since matching corresponding points over a two-dimensional search space is very time-consuming, the epipolar constraint is introduced to reduce the matching from a two-dimensional to a one-dimensional search, as shown in Fig. 3.
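Once the epipolar constraint has reduced matching to one dimension, depth follows from the matched disparity. The relation below is the standard rectified-stereo formula, not one stated explicitly in the patent; f is the focal length in pixels, B the baseline, and d the disparity.

```python
def depth_from_disparity(f_px, baseline_m, d_px):
    """Depth of a matched point for a rectified stereo pair: z = f * B / d.
    Standard stereo geometry, used here as a sketch of how step 12's
    disparity yields the eye's distance from the camera."""
    return f_px * baseline_m / d_px
```

For example, a 700 px focal length, 0.12 m baseline and 35 px disparity give a depth of 2.4 m.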

Step 2, extract the face region from the rectified images.

The specific method of extracting the face region in step 2 is:

Step 21, binarize the rectified image, converting it into a binary image of black and white points.

Step 22, taking the horizontal direction of the binary image as the X axis and the perpendicular direction as the Y axis, determine the start and end points of the face region along the X axis:

Step 221, set X = b, b = 0.

Step 222, for column X = b, count the white points SumTempx and the total points Sum_p_c; set a threshold pLThresh1. If SumTempx / Sum_p_c is greater than or equal to pLThresh1, take X = b as the start of the face region along the X axis, i.e. the left face boundary x_L.

If the ratio is less than pLThresh1, add the step size x_p to b and repeat step 222 until the left boundary x_L is found.

Step 223, set X = b, with b initialized at the right edge of the image.

Step 224, for column X = b, count the white points SumTempx and the total points Sum_p_c; if SumTempx / Sum_p_c is greater than or equal to pLThresh1, take X = b as the end of the face region along the X axis, i.e. the right face boundary x_R.

If the ratio is less than pLThresh1, add the step size -x_p to b and repeat step 224 until the right boundary x_R is found.

Step 23, determine the start and end points of the face region along the Y axis:

Step 231, set Y = c, c = 0.

Step 232, for row Y = c, count the white points SumTempy and the total points Sum_p_r; set a threshold pLThresh2. If SumTempy / Sum_p_r is greater than or equal to pLThresh2, take Y = c as the start of the face region along the Y axis, i.e. the upper face boundary y_U.

If the ratio is less than pLThresh2, add the step size y_p to c and repeat step 232 until the upper boundary y_U is found.

Step 233, with x_R, x_L and y_U known, and given that for a face meeting aesthetic proportions the hairline-to-chin distance is about 1.36 times the face width, the lower face boundary is:

y_D = y_U - 1.36 × (x_R - x_L)

From x_R, x_L, y_U and y_D the rectangle framing the face is obtained.

Step 24, from the X-axis start and end points of step 22 and the Y-axis start and end points of step 23, obtain the rectangle framing the face.

Step 3, extract the eye region within the face region, optimize it, and detect the iris in the optimized eye region.

The eye region is extracted from the face region by computing the horizontal integral projection over the columns [x_L, x_R]:

M_h(y) = Σ_{x = x_L}^{x_R} G(x, y)

where G(x, y) is the gray value of the binary image at coordinates (x, y), and M_h(y) is the horizontal integral projection of the region [x_L, x_R].

The horizontal projection curve of the binary image obtained from this formula is shown in Fig. 6. The pronounced troughs of the curve correspond to facial features: viewed from left to right, the four most obvious troughs correspond to the eyebrows, the eyes, the nose and the mouth respectively. The second and third peak points of the curve must therefore be located, and their Y-axis coordinates in the binary image are k_1 and k_2.

Let y_1 = k_2 - 3/5 (k_2 - k_1) and y_2 = k_2 + 3/5 (k_2 - k_1); the band between y_1 and y_2 is the eye region, as shown in Fig. 7.

其中,优化人眼区域的方法为:Among them, the method of optimizing the human eye area is:

步骤31,利用高斯滤波滤除人眼区域图像中的噪声,即平滑处理;Step 31, using Gaussian filtering to filter out the noise in the image of the human eye area, that is, smoothing;

步骤32,对平滑处理后的图像中的像素进行梯度幅值和方向的计算,再进行极大值抑制,得到非极大值抑制图像:Step 32, calculate the gradient magnitude and direction of the pixels in the smoothed image, and then perform maximum value suppression to obtain a non-maximum value suppressed image:

依次选取平滑处理后图像中的每一个像素点作为当前像素点,若当前像素点的幅值大于其梯度方向上相邻的两个像素点的幅值,则该当前像素点为局部最大值;否则将该当前像素点的灰度值置0;剔除平滑图像中所有灰度值为0的像素点后,组成非极大值抑制图像;Select each pixel in the smoothed image in turn as the current pixel, if the magnitude of the current pixel is greater than the magnitude of the two adjacent pixels in the gradient direction, then the current pixel is a local maximum; Otherwise, the gray value of the current pixel is set to 0; after removing all pixels with a gray value of 0 in the smooth image, a non-maximum value suppressed image is formed;

步骤33,设定两个阈值L和H,其中L=1/2H,依次任选非极大值抑制图像中的一个像素点作为当前像素点,若该当前像素点的幅值大于或等于L,则该当前像素点为低阈值局部最大值点,否则将该当前像素点的灰度值置0;若该当前像素点的幅值大于或等于H,则该当前像素点为高阈值局部最大值点,否则将该当前像素点的灰度值置0;Step 33, set two thresholds L and H, where L=1/2H, select a pixel in the non-maximum suppression image in turn as the current pixel, if the amplitude of the current pixel is greater than or equal to L , then the current pixel is a low threshold local maximum point, otherwise the gray value of the current pixel is set to 0; if the amplitude of the current pixel is greater than or equal to H, then the current pixel is a high threshold local maximum value point, otherwise the gray value of the current pixel point is set to 0;

步骤34,所有低阈值局部最大值点组成低阈值边缘图像;所有高阈值局部最大值点组成高阈值边缘图像;Step 34, all low-threshold local maximum points form a low-threshold edge image; all high-threshold local maximum points form a high-threshold edge image;

Step 35: if an edge of the high-threshold edge image is broken, locate the pixel of the low-threshold edge image at the break-point coordinates, search the eight-neighbourhood of that pixel for a pixel able to bridge the break, and connect it to the break in the high-threshold edge image.

Step 36: repeat step 35 until the edges of the high-threshold edge image are closed; the high-threshold edge image at this point is the optimized human-eye region.
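Steps 33 to 36 amount to double-threshold hysteresis linking. A minimal numpy sketch, with L = H/2 as in the text (the array names and the wrap-around neighbour handling of `np.roll` are simplifications of mine; a real implementation would pad the borders):

```python
import numpy as np

def hysteresis_link(mag, H):
    """Two-threshold edge linking with L = H/2 (sketch of steps 33-36).
    Strong pixels (>= H) seed the edges; weak pixels (>= L) are kept
    only if 8-connected to an already-accepted edge pixel."""
    L = H / 2.0
    strong = mag >= H
    weak = mag >= L
    edges = strong.copy()
    while True:
        # grow accepted edges into their 8-neighbourhoods
        grown = np.zeros_like(edges)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
        new_edges = edges | (weak & grown)
        if (new_edges == edges).all():
            return edges
        edges = new_edges
```

The loop terminates because the edge set only grows and is bounded by the weak-pixel set.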

The iris is detected as follows:

Step 37: using the extreme points of the eye-region edge obtained in step 36 in the up, down, left and right directions, estimate the centre and radius of the eye region by the minimum-circumscribed-rectangle method, thereby obtaining the parametric equation of the eye region.
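One possible reading of step 37, in which the centre and radius are estimated from the bounding box of the four extreme points, can be sketched as follows (the exact estimate the patent intends is not spelled out, so this is an assumption):

```python
def circle_from_extremes(xs, ys):
    """Estimate a circle's centre and radius from the leftmost/rightmost
    x coordinates (xs) and topmost/bottommost y coordinates (ys) of the
    eye-region edge, i.e. from its minimum circumscribed rectangle."""
    cx = (min(xs) + max(xs)) / 2.0
    cy = (min(ys) + max(ys)) / 2.0
    # average of the half-width and half-height as the radius estimate
    r = ((max(xs) - min(xs)) + (max(ys) - min(ys))) / 4.0
    return cx, cy, r
```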

Step 38: apply a Hough transform to the parametric equation over the radius range to obtain a transform space containing a number of circles of radius R.

Step 39: take any circle of the transform space as the current circle, traverse all circles in the transform space, count the circles whose centre coordinates equal those of the current circle (recorded as the same-centre count), and mark the current circle.

Step 310: repeat step 39 until every circle in the transform space has been marked as the current circle.

Step 311: find the current circle with the largest same-centre count; its centre coordinates are the iris coordinates. When several coordinate points of the transform space coincide, the corresponding edge points lie on the same circle, so the coordinates at which the count peaks are the parameters of that circle, which yields the iris.
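The voting in steps 38 to 311 is the classical Hough circle transform at a known radius. A compact numpy sketch (the 64 sampled angles and all names are assumptions of mine, not the patent's implementation):

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Each edge point votes for the candidate centres lying at distance
    `radius` from it; the accumulator peak is taken as the circle centre
    (here, the iris centre)."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for y, x in edge_points:
        a = np.round(y - radius * np.sin(thetas)).astype(int)
        b = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        # deduplicate so each edge point votes at most once per cell
        cells = np.unique(np.stack([a[ok], b[ok]], axis=1), axis=0)
        acc[cells[:, 0], cells[:, 1]] += 1
    return np.unravel_index(acc.argmax(), acc.shape)
```

Counting coincident centre votes is exactly the "same-centre count" of step 39; the argmax plays the role of step 311's peak search.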

Step 4: compute the spatial position of the iris relative to the camera, and from the positional relationship between the camera and the instrument display obtain the spatial position of the human eye relative to the instrument display.
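The "spatial position relative to the camera" in step 4 is standard stereo triangulation for a rectified binocular pair. A minimal sketch under an assumed pinhole model, with focal length f in pixels, baseline B in metres, and x coordinates measured from the principal point; none of these specifics appear in the patent:

```python
def triangulate_x_z(x_left, x_right, f, baseline):
    """Depth and lateral offset from horizontal disparity:
    Z = f * B / d with d = x_left - x_right, then X = x_left * Z / f."""
    d = x_left - x_right
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = f * baseline / d
    X = x_left * Z / f
    return X, Z
```

For f = 700 px, B = 0.06 m and a 10-px disparity, this gives Z = 4.2 m.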

The invention also provides a naked-eye 3D automobile instrument display device comprising a display panel 1, a lenticular lens 4 and a microprocessor. A hydraulic adjusting device 3 is arranged between the display panel 1 and the lenticular lens 4 and is controlled by the microprocessor to change the distance between the two. The distance to be set between the display panel 1 and the lenticular lens 4 is computed from the position of the human eye, so that the driver sees a correct naked-eye 3D display from any viewing position.

As shown in FIG. 10, four hydraulic adjusting devices are symmetrically distributed at the four corners of the bottom surface of the display panel 1 and connected to the lenticular lens 4. To ease the adjustment of the distance between the display panel 1 and the lenticular lens 4, the two are joined by an annular elastic connector 2.

As shown in FIG. 11, the four hydraulic adjusting devices 3 are arranged in a dual-circuit diagonal layout, which effectively maintains the parallelism between the display panel 1 and the lenticular lens 4 and offers high reliability.

The above embodiments merely illustrate the invention and do not limit it. Those of ordinary skill in the relevant art may make various changes and modifications without departing from the spirit and scope of the invention; all equivalent technical solutions therefore also belong to the invention, whose scope of patent protection is defined by the claims.

Claims (10)

1. A naked-eye 3D automobile instrument display method capable of human-eye tracking, characterized by comprising the following steps:

Step 1: install a binocular camera at the front of the vehicle cockpit and calibrate its position; capture a left and a right image of the same instant with the binocular camera and rectify the two images respectively to obtain rectified images;

Step 2: extract the face region from the rectified images;

Step 3: extract the human-eye region within the face region, optimize the extracted eye region, and detect the iris in the optimized eye region;

Step 4: compute the spatial position of the iris relative to the camera, and from the positional relationship between the camera and the instrument display together with the spatial position of the iris relative to the camera, compute the spatial position of the human eye relative to the instrument display.

2. The naked-eye 3D automobile instrument display method capable of human-eye tracking of claim 1, characterized in that step 2 comprises the following sub-steps:

Step 21: binarize the rectified image to obtain a binary image;

Step 22: with the horizontal direction of the binary image as the X axis and the direction perpendicular to the X axis as the Y axis, determine the start point and end point of the face region along the X axis;

Step 23: determine the start point and end point of the face region along the Y axis;

Step 24: from the X-axis start and end points and the Y-axis start and end points, obtain the rectangle framing the face.

3. The naked-eye 3D automobile instrument display method capable of human-eye tracking of claim 2, characterized in that the start point and end point of the face region along the X axis are determined in step 22 as follows:

Step 221: set X = b, with b = 0;

Step 222: for X = b, record the number of white points in the binary image as SumTempx and the total number of points as Sum_p_c; set a threshold pLThresh1; if the ratio SumTempx/Sum_p_c is greater than or equal to pLThresh1, then X = b is the start point of the face region along the X axis, i.e. the X coordinate of the left face boundary, denoted x_L; if the ratio SumTempx/Sum_p_c is less than pLThresh1, add the step size x_p to b and repeat step 222 until the X coordinate of the left face boundary is found;

Step 223: set X = b, with b = 0;

Step 224: for X = b, record the number of white points in the binary image as SumTempx and the total number of points as Sum_p_c; if the ratio SumTempx/Sum_p_c is greater than or equal to pLThresh1, then X = b is the end point of the face region along the X axis, i.e. the X coordinate of the right face boundary, denoted x_R; if the ratio SumTempx/Sum_p_c is less than pLThresh1, add the step size -x_p to b and repeat step 224 until the X coordinate of the right face boundary is found.

4. The naked-eye 3D automobile instrument display method capable of human-eye tracking of claim 3, characterized in that the start point and end point of the face region along the Y axis are determined in step 23 as follows:

Step 231: set Y = c, with c = 0;

Step 232: for Y = c, record the number of white points in the binary image as SumTempy and the total number of points as Sum_p_r; set a threshold pLThresh2; if the ratio SumTempy/Sum_p_r is greater than or equal to pLThresh2, then Y = c is the start point of the face region along the Y axis, i.e. the Y coordinate of the upper face boundary, denoted y_U; if the ratio SumTempy/Sum_p_r is less than pLThresh2, add the step size y_p to c and repeat step 232 until the Y coordinate of the upper face boundary is found;

Step 233: with x_R, x_L and y_U known, the coordinate of the lower face boundary is obtained from:

y_D = y_U - 1.36 × (x_R - x_L).

5. The naked-eye 3D automobile instrument display method capable of human-eye tracking of claim 4, characterized in that the human-eye region is extracted from the face region in step 3 as follows:

compute the horizontal integral projection curve M_h(y) = Σ_{x=x_L}^{x_R} G(x, y), where G(x, y) denotes the grey value of the binary image at coordinates (x, y) and M_h denotes the horizontal integral projection of the grey values over the interval [x_L, x_R];

find the trough of the horizontal integral projection curve corresponding to the human eyes, and from the two peak points adjacent to this trough obtain their Y-axis coordinates k_1 and k_2;

let y_1 = k_2 - 3/5(k_2 - k_1) and y_2 = k_2 + 3/5(k_2 - k_1); the band bounded by y_1 and y_2 is the human-eye region.

6. The naked-eye 3D automobile instrument display method capable of human-eye tracking of claim 1, characterized in that optimizing the extracted human-eye region in step 3 comprises:

Step 31: process the human-eye region with Gaussian filtering to obtain a smoothed image;

Step 32: compute the gradient magnitude and direction of the pixels of the smoothed image, then apply non-maximum suppression to obtain the non-maximum-suppressed image, specifically: take each pixel of the smoothed image in turn as the current pixel; if its magnitude is greater than the magnitudes of the two adjacent pixels along its gradient direction, the current pixel is a local maximum, otherwise its grey value is set to 0; after discarding all pixels of the smoothed image whose grey value is 0, the remaining pixels form the non-maximum-suppressed image;

Step 33: set two thresholds L and H with L = H/2; take each pixel of the non-maximum-suppressed image in turn as the current pixel; if its magnitude is greater than or equal to L, it is a low-threshold local maximum point, otherwise its grey value is set to 0; if its magnitude is greater than or equal to H, it is a high-threshold local maximum point, otherwise its grey value is set to 0;

Step 34: all low-threshold local maximum points form the low-threshold edge image, and all high-threshold local maximum points form the high-threshold edge image;

Step 35: if an edge of the high-threshold edge image is broken, locate the pixel of the low-threshold edge image at the break-point coordinates, search the eight-neighbourhood of that pixel for a pixel able to bridge the break, and connect it to the break in the high-threshold edge image;

Step 36: repeat step 35 until the edges of the high-threshold edge image are closed; the high-threshold edge image obtained at this point is the optimized human-eye region.

7. The naked-eye 3D automobile instrument display method capable of human-eye tracking of claim 6, characterized in that detecting the iris in the optimized human-eye region in step 3 comprises:

Step 37: using the extreme points of the eye-region edge obtained in step 36 in the up, down, left and right directions, estimate the centre and radius of the eye region by the minimum-circumscribed-rectangle method, thereby obtaining the parametric equation of the eye region;

Step 38: apply a Hough transform to the parametric equation over the radius range to obtain a transform space containing a number of circles of radius R;

Step 39: take any circle of the transform space as the current circle, traverse all circles in the transform space, count the circles whose centre coordinates equal those of the current circle (recorded as the same-centre count), and mark the current circle;

Step 310: repeat step 39 until every circle in the transform space has been marked as the current circle;

Step 311: find the current circle with the largest same-centre count; its centre coordinates are the iris coordinates.

8. A naked-eye 3D automobile instrument display device comprising a display panel (1) and a lenticular lens (4), characterized in that the device further comprises a microprocessor, a hydraulic adjusting device (3) being arranged between the display panel (1) and the lenticular lens (4) and connected to the microprocessor.

9. The naked-eye 3D automobile instrument display device of claim 8, characterized in that four hydraulic adjusting devices (3) are provided, distributed in a dual-circuit diagonal arrangement at the four corners of the bottom surface of the display panel (1) and connected to the lenticular lens (4).

10. The naked-eye 3D automobile instrument display device of claim 8, wherein the display panel (1) and the lenticular lens (4) are connected by an annular elastic connector (2).
CN201610575172.0A 2016-07-20 2016-07-20 A naked-eye 3D automobile instrument display method and device capable of human eye tracking Pending CN106218409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610575172.0A CN106218409A (en) A naked-eye 3D automobile instrument display method and device capable of human eye tracking


Publications (1)

Publication Number Publication Date
CN106218409A true CN106218409A (en) 2016-12-14

Family

ID=57532056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610575172.0A Pending CN106218409A (en) A naked-eye 3D automobile instrument display method and device capable of human eye tracking

Country Status (1)

Country Link
CN (1) CN106218409A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104391567A (en) * 2014-09-30 2015-03-04 深圳市亿思达科技集团有限公司 Display control method for three-dimensional holographic virtual object based on human eye tracking
CN104408462A (en) * 2014-09-22 2015-03-11 广东工业大学 Quick positioning method of facial feature points
WO2015168464A1 (en) * 2014-04-30 2015-11-05 Visteon Global Technologies, Inc. System and method for calibrating alignment of a three-dimensional display within a vehicle


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIANG Yuanping, WANG Guocai, QIAO Huidong: "Extraction of face contours based on frontal face images", Microcomputer Information *
ZHOU Fei, WANG Chensheng: "An improved edge-extraction algorithm based on the Canny algorithm", Proceedings of the Beijing Society of Image and Graphics *
ZHANG Jie, YANG Xiaofei, ZHAO Ruilian: "Research on precise human-eye localization based on integral projection and Hough-transform circle detection", Electronic Devices *
SU Jianbo, XU Bo: "Introduction to Applied Pattern Recognition: Face Recognition and Speech Recognition", 31 May 2001 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107067013A (en) * 2017-04-25 2017-08-18 航天科技控股集团股份有限公司 One kind is based on many instrument hawkeye fuzzy detection system and methods
US12054047B2 (en) 2017-07-28 2024-08-06 Samsung Electronics Co., Ltd. Image processing method of generating an image based on a user viewpoint and image processing device
CN109309828A (en) * 2017-07-28 2019-02-05 三星电子株式会社 Image processing method and image processing device
CN109309828B (en) * 2017-07-28 2022-03-22 三星电子株式会社 Image processing method and image processing apparatus
US11634028B2 (en) 2017-07-28 2023-04-25 Samsung Electronics Co., Ltd. Image processing method of generating an image based on a user viewpoint and image processing device
CN107833263A (en) * 2017-11-01 2018-03-23 宁波视睿迪光电有限公司 Feature tracking method and device
CN109963140A (en) * 2017-12-25 2019-07-02 深圳超多维科技有限公司 Nakedness-yet stereoscopic display method and device, equipment and computer readable storage medium
CN109961473A (en) * 2017-12-25 2019-07-02 深圳超多维科技有限公司 Eyes localization method and device, electronic equipment and computer readable storage medium
WO2021110034A1 (en) * 2019-12-05 2021-06-10 北京芯海视界三维科技有限公司 Eye positioning device and method, and 3d display device and method
CN111985303A (en) * 2020-07-01 2020-11-24 江西拓世智能科技有限公司 Human face recognition and human eye light spot living body detection device and method
CN112650495A (en) * 2021-01-05 2021-04-13 东风汽车股份有限公司 Method for creating visual area of display plane of combination instrument based on CATIA software
CN112650495B (en) * 2021-01-05 2022-05-17 东风汽车股份有限公司 Method for creating visual area of display plane of combination instrument based on CATIA software
CN113534490A (en) * 2021-07-29 2021-10-22 深圳市创鑫未来科技有限公司 Stereoscopic display device and stereoscopic display method based on user eyeball tracking
CN113534490B (en) * 2021-07-29 2023-07-18 深圳市创鑫未来科技有限公司 Stereoscopic display device and stereoscopic display method based on user eyeball tracking
CN117092830A (en) * 2023-10-18 2023-11-21 世优(北京)科技有限公司 Naked eye 3D display device and driving method thereof
CN117092830B (en) * 2023-10-18 2023-12-22 世优(北京)科技有限公司 Naked eye 3D display device and driving method thereof

Similar Documents

Publication Publication Date Title
CN106218409A (en) A naked-eye 3D automobile instrument display method and device capable of human eye tracking
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN106782268B (en) Display system and driving method for display panel
CN106066696B (en) Gaze tracking method based on projection mapping correction and gaze compensation under natural light
WO2021004312A1 (en) Intelligent vehicle trajectory measurement method based on binocular stereo vision system
CN109211207B (en) Screw identification and positioning device based on machine vision
CN105517677B (en) The post-processing approach and device of depth map/disparity map
US9141873B2 (en) Apparatus for measuring three-dimensional position, method thereof, and program
CN104036488B (en) Binocular vision-based human body posture and action research method
CN106485275A (en) Method for positioning and laminating a cover glass to a liquid crystal display screen
WO2019080229A1 (en) Chess piece positioning method and system based on machine vision, storage medium, and robot
US9332247B2 (en) Image processing device, non-transitory computer readable recording medium, and image processing method
JP2015212849A (en) Image processing apparatus, image processing method, and image processing program
CN106530310A (en) Pedestrian counting method and device based on human head top recognition
KR102809045B1 (en) Method and apparatus of measuring dynamic crosstalk
CN105913013A (en) Binocular vision face recognition algorithm
CN109948400A (en) Smartphone capable of 3D facial feature recognition and recognition method thereof
CN103020988A (en) Method for generating motion vector of laser speckle image
CN102842038A (en) Environment recognition device and environment recognition method
CN118052883A (en) Binocular vision-based multiple circular target space positioning method
JP2012209895A (en) Stereo image calibration method, stereo image calibration device and stereo image calibration computer program
CN110909571A (en) High-precision face recognition space positioning method
CN110099268B (en) Blind area perspective display method with natural color matching and natural fusion of display area
CN101980292A (en) Regular octagonal template-based board camera intrinsic parameter calibration method
CN107009962A (en) Panoramic observation method based on gesture recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161214