CN104077568A - High-accuracy driver behavior recognition and monitoring method and system - Google Patents
- Publication number
- CN104077568A CN104077568A CN201410284311.5A CN201410284311A CN104077568A CN 104077568 A CN104077568 A CN 104077568A CN 201410284311 A CN201410284311 A CN 201410284311A CN 104077568 A CN104077568 A CN 104077568A
- Authority
- CN
- China
- Prior art keywords
- real
- driving behavior
- target
- image
- tau
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
本发明公开了一种高精度的驾驶员行为识别与监控方法及系统,方法包括:通过双摄像头获取实时视频帧;对获取的实时视频帧进行目标部位检测和目标部位特征提取,从而得到实时目标部位特征点;将实时目标部位特征点与根据训练样本和目标模板得到的标准目标部位特征点进行比对,并根据比对的结果对驾驶员的驾驶行为进行识别与分析;将识别与分析的结果反馈给用户并进行实时显示。本发明的方法通过双摄像头采集的实时视频数据,解决了现有技术非实时性的不足;采用了双摄像头来采集实时视频图像,更加精确;通过目标部位检测、目标部位特征提取和特征点比对,进一步提高了识别与监控的准确度。本发明可广泛应用于智能交通与图像处理领域。
The invention discloses a high-accuracy driver behavior recognition and monitoring method and system. The method includes: acquiring real-time video frames through dual cameras; performing target-part detection and target-part feature extraction on the acquired frames to obtain real-time target-part feature points; comparing the real-time target-part feature points with standard target-part feature points obtained from training samples and target templates, and recognizing and analyzing the driver's driving behavior according to the comparison results; and feeding the recognition and analysis results back to the user with real-time display. By analyzing real-time video data collected by dual cameras, the method overcomes the non-real-time limitation of the prior art; the dual cameras capture real-time video images more accurately than a single camera; and target-part detection, target-part feature extraction, and feature-point comparison further improve the accuracy of recognition and monitoring. The invention can be widely applied in the fields of intelligent transportation and image processing.
Description
技术领域 Technical Field
本发明涉及智能交通与图像处理技术领域,尤其是一种高精度的驾驶员行为识别与监控方法及系统。The invention relates to the technical field of intelligent transportation and image processing, in particular to a high-precision driver behavior recognition and monitoring method and system.
背景技术 Background Art
随着汽车数量的增加和公路建设规模的扩大,交通事故等问题日益明显。中国是人口最多的国家,也是道路交通事故死亡人数最高的国家,连续数年位居世界第一位。绝大部分交通事故是由于驾驶员操作失误和疲劳驾驶造成的。由于年龄、生理或心理健康状况、情绪等方面的变化,即使优秀驾驶员也不一定能长久地保持其原有的良好驾驶状态,但驾驶员本人却很难意识到这种渐进性的衰减或消退。因此,识别与监控驾驶员的驾驶行为并对违规行为给予警报,对提高驾驶员的驾驶能力并降低其驾驶负荷,协调好驾驶员与车辆以及交通环境之间的关系,从本质上减少交通事故状况的发生,具有重要意义。With the increase in the number of cars and the expansion of road construction, problems such as traffic accidents are becoming increasingly prominent. China has the largest population and the highest number of road traffic accident deaths, ranking first in the world for several consecutive years. The vast majority of traffic accidents are caused by driver error and fatigued driving. Owing to changes in age, physical or mental health, and emotional state, even an excellent driver cannot necessarily maintain a good driving state for long, yet the driver himself finds it hard to notice this gradual decline. Therefore, recognizing and monitoring the driver's driving behavior and issuing alerts for violations is of great significance for improving the driver's driving ability, reducing the driving load, coordinating the relationship among the driver, the vehicle, and the traffic environment, and fundamentally reducing traffic accidents.
目前国内外在识别与监控驾驶员行为方面已取得不少的研究成果,大致可分为主观和客观这两种方法。其中,主观的研究方法包括主观调查表和驾驶员自我记录等。客观的方法包括脑电图、眼电图、肌电图、呼吸气流、呼吸效果、动脉血液氧饱和时的温度和心电图等测量方法。尽管上述方法是比较准确的,但是这些方法是超前或滞后的而非实时的,一般是在驾驶前或驾驶后测量,实时性较差,而且需要在驾驶室有限的空间内安置复杂的检测仪器,安装难度大。此外,现有技术一般只采用单个摄像头来获取驾驶图像,存在视频死角,无法全面采集驾驶图像,精确度较低。At present, many research results on recognizing and monitoring driver behavior have been achieved at home and abroad, roughly divided into subjective and objective methods. Subjective methods include questionnaires and driver self-reports. Objective methods include measurements such as electroencephalography, electrooculography, electromyography, respiratory airflow, respiratory effort, temperature at arterial blood oxygen saturation, and electrocardiography. Although these methods are relatively accurate, they are performed before or after driving rather than in real time, so their real-time performance is poor; moreover, complex detection instruments must be installed in the limited space of the cab, making installation difficult. In addition, the prior art generally uses only a single camera to capture driving images, leaving blind spots in the video, so driving images cannot be captured comprehensively and the accuracy is low.
综上所述,业内亟需一种实时、安装难度小的,高精度的驾驶员行为识别与监控方法及系统。To sum up, the industry urgently needs a real-time, easy-to-install, high-accuracy driver behavior recognition and monitoring method and system.
发明内容 Summary of the Invention
为了解决上述技术问题,本发明的目的是:提供一种实时、安装难度小,高精度的驾驶员行为识别与监控系统。In order to solve the above technical problems, the object of the present invention is to provide a real-time, easy-to-install, high-accuracy driver behavior recognition and monitoring system.
本发明的另一目的是:提供一种实时、安装难度小,高精度的驾驶员行为识别与监控方法。Another object of the present invention is to provide a real-time, easy-to-install, high-accuracy driver behavior recognition and monitoring method.
本发明解决其技术问题所采用的技术方案是:一种高精度的驾驶员行为识别与监控系统,包括:The technical solution adopted by the present invention to solve the technical problem is: a high-precision driver behavior recognition and monitoring system, comprising:
双摄像头,用于采集实时视频数据帧;Dual cameras for collecting real-time video data frames;
视频解码器,用于对双摄像头采集的数据进行解码,从而获得实时视频图像数据帧;The video decoder is used to decode the data collected by the dual cameras to obtain real-time video image data frames;
硬盘刻录机,用于存储实时视频图像数据帧,并将实时视频图像数据帧上传给工控机处理中心;Hard disk recorder, used to store real-time video image data frames, and upload real-time video image data frames to the industrial computer processing center;
工控机处理中心,用于根据上传的实时视频图像数据帧进行特征检测、特征提取和实时状态分析,从而对驾驶员行为进行识别与监控;The industrial computer processing center is used to perform feature detection, feature extraction and real-time state analysis based on the uploaded real-time video image data frame, so as to identify and monitor the driver's behavior;
警报模块,用于对异常驾驶行为进行报警;Alarm module, used for alarming abnormal driving behavior;
所述双摄像头依次通过视频解码器、硬盘刻录机和工控机处理中心进而与警报模块的输入端连接。The dual camera is connected with the input terminal of the alarm module through the video decoder, the hard disk recorder and the industrial computer processing center in turn.
进一步,所述工控机处理中心包括:Further, the industrial computer processing center includes:
目标部位检测单元,用于从实时视频图像数据帧中检测出目标部位;The target part detection unit is used to detect the target part from the real-time video image data frame;
目标部位高精度定位单元,用于从候选区域中提取出目标部位特征;The target part high-precision positioning unit is used to extract the target part features from the candidate area;
驾驶状态识别与分析单元,用于将目标部位特征与标准特征进行比对,并根据比对的结果对驾驶员的驾驶行为及状态进行识别与分析;The driving state identification and analysis unit is used to compare the features of the target part with the standard features, and identify and analyze the driving behavior and state of the driver according to the comparison results;
所述目标部位检测单元的输入端与硬盘刻录机的输出端连接,所述目标部位检测单元的输出端通过目标部位高精度定位单元进而和驾驶状态识别与分析单元的输入端连接,所述驾驶状态识别与分析单元的输出端与警报模块的输入端连接。The input end of the target-part detection unit is connected with the output end of the hard disk recorder; the output end of the target-part detection unit is connected, through the target-part high-precision positioning unit, with the input end of the driving state recognition and analysis unit; and the output end of the driving state recognition and analysis unit is connected with the input end of the alarm module.
进一步,还包括为高分辨率双摄像头供电的车载电源以及为硬盘刻录机和工控机处理中心供电的电源模块。Further, it also includes an on-board power supply for high-resolution dual cameras and a power supply module for hard disk recorders and industrial computer processing centers.
本发明解决其技术问题所采用的另一技术方案是:一种高精度的驾驶员行为识别与监控方法,包括:Another technical solution adopted by the present invention to solve the technical problem is: a high-precision driver behavior recognition and monitoring method, including:
A、通过双摄像头获取实时视频帧;A. Obtain real-time video frames through dual cameras;
B、对获取的实时视频帧进行目标部位检测和目标部位特征提取,从而得到实时目标部位特征点;B. Perform target part detection and target part feature extraction on the acquired real-time video frame, so as to obtain real-time target part feature points;
C、将实时目标部位特征点与根据训练样本和目标模板得到的标准目标部位特征点进行比对,并根据比对的结果对驾驶员的驾驶行为进行识别与分析;C. Compare the real-time feature points of the target part with the standard target part feature points obtained according to the training samples and target templates, and identify and analyze the driver's driving behavior according to the comparison results;
D、将识别与分析的结果反馈给用户并进行实时显示。D. Feedback the identification and analysis results to the user and display them in real time.
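Steps A through D can be sketched as a minimal processing loop. All function names below are hypothetical placeholders invented for illustration; the patent does not specify an implementation, and the corner-point "features" here merely stand in for real facial feature points.

```python
# Hypothetical sketch of steps A-D; function names and geometry are
# illustrative placeholders, not the patent's actual implementation.

def detect_target_parts(frame):
    # Step B (detection): return candidate regions for head/face.
    # Placeholder: a single region covering the whole frame.
    h, w = len(frame), len(frame[0])
    return [(0, 0, w, h)]

def extract_feature_points(frame, regions):
    # Step B (feature extraction): return feature points per region.
    # Placeholder: region corners stand in for real facial feature points.
    pts = []
    for (x, y, w, h) in regions:
        pts += [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return pts

def compare_with_template(points, template_points, tol=5.0):
    # Step C: compare real-time points with standard template points.
    dists = [abs(p[0] - q[0]) + abs(p[1] - q[1])
             for p, q in zip(points, template_points)]
    return max(dists) <= tol  # True -> matches the "normal" template

def monitor_frame(frame, template_points):
    regions = detect_target_parts(frame)                  # step B: detect
    points = extract_feature_points(frame, regions)       # step B: extract
    normal = compare_with_template(points, template_points)  # step C
    return {"points": points, "normal": normal}           # step D: feed back

frame = [[0] * 64 for _ in range(48)]        # stand-in for one video frame
template = [(0, 0), (64, 0), (0, 48), (64, 48)]
result = monitor_frame(frame, template)
print(result["normal"])
```

In a real system the frame would come from the dual-camera capture path and the template from the trained samples described below.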
进一步,所述目标部位包括头部和人脸。Further, the target parts include head and human face.
进一步,所述步骤B,其包括:Further, the step B includes:
B1、根据预设的筛选条件对获取的视频帧进行人脸检测,从而检测出人脸图像,所述预设的筛选条件为:B1. Perform face detection on the acquired video frame according to preset screening conditions, thereby detecting a human face image, and the preset screening conditions are:
其中,m为主函数,K为目标函数,τ为可视系数,为面积系数,为检测到的比例系数;Among them, m is the main function, K is the objective function, τ is the visible coefficient, is the area coefficient, is the detected scale factor;
B2、对人脸图像进行眼睛定位、归一化处理和特征点提取,从而提取出实时人脸特征点。B2. Perform eye positioning, normalization processing and feature point extraction on the face image, so as to extract real-time face feature points.
进一步,所述步骤B2中对人脸图像进行眼睛定位这一步骤,其具体为:Further, in the step B2, the step of eye positioning is performed on the face image, which is specifically:
根据预设的特征点计算公式对人脸图像进行眼睛定位,从而定位出眼睛区域,所述预设的特征点计算公式为:Perform eye positioning on the face image according to a preset feature point calculation formula, thereby locating the eye region, and the preset feature point calculation formula is:
其中,E为单位像素点,L为像素点x的区域面积,D、S、A、N分别为像素点x的四个角点,h为可能定位的目标点。Here, E is the unit pixel, L is the area of the region of pixel x, D, S, A, and N are the four corner points of pixel x, and h is a candidate target point.
进一步,所述步骤B2中对人脸图像进行归一化处理这一步骤,其具体为:Further, the step of normalizing the face image in the step B2 is specifically:
根据两眼睛的距离对眼睛区域进行归一化处理,从而得到有效人脸区域,所述有效人脸区域满足:The eye area is normalized according to the distance between the two eyes, so as to obtain the effective face area, and the effective face area satisfies:
其中,k为眼睛瞳孔的半径,d为两眼之间的距离,为像素系数,为光度系数,V和D为两眼睛之间的坐标点,L为主函数。Among them, k is the radius of the pupil of the eye, d is the distance between the two eyes, is the pixel coefficient, is the photometric coefficient, V and D are the coordinate points between the two eyes, and L is the main function.
进一步,所述步骤B2中对人脸图像进行归一化处理这一步骤还包括对人脸图像进行图像增强的步骤,所述进行图像增强的步骤所采用的公式为:Further, the step of performing normalization processing on the face image in the step B2 also includes the step of image enhancement on the face image, and the formula used in the step of image enhancement is:
其中,j为以人脸中央为半径到中心点d的取值范围,θ为嘴巴中心位置到检测点R的取值范围,D为原始图像像素,K为处理后的目标像素函数,χτ,j为处理区域中的像素点。Among them, j is the value range from the center of the face to the center point d, θ is the value range from the center of the mouth to the detection point R, D is the original image pixel, K is the target pixel function after processing, χ τ , j is the pixel in the processing area.
进一步,所述步骤C,其包括:Further, said step C, which includes:
C1、将实时目标部位特征点与根据训练样本和目标模板得到的标准目标部位特征点进行比对,从而对视频目标图像进行识别,所述对视频目标图像进行识别所依据的公式为:C1, comparing the real-time target part feature points with the standard target part feature points obtained according to the training sample and the target template, so as to identify the video target image, the formula based on which the video target image is identified is:
其中,α为像素系数,K为定值常数,χτ均为视频目标图像的横坐标,υj为视频目标图像可跟踪系数的纵坐标,γ为灰度系数,i、j为数列系数;Among them, α is the pixel coefficient, K is a fixed value constant, χ τ are the abscissas of the video target image, υ j are the ordinates of the trackable coefficients of the video target image, γ is the gray scale coefficient, and i, j are the sequence coefficients;
C2、根据识别的结果对驾驶员的驾驶行为进行分析。C2. Analyze the driver's driving behavior according to the recognition result.
本发明的系统的有益效果是:通过工控机处理中心,能够实现完全自然的非接触地对驾驶员的驾驶行为进行检测、识别与监控,再结合实时视频采集结果,可以快捷灵活地获取实时数据,对实时视频帧进行分析,解决了现有技术非实时性的不足;所需的检测仪器较少,降低了硬件设备的安装难度,节约驾驶室空间;采用了双摄像头来采集实时视频图像,克服了只采用单摄像头采集图像的不足,能对驾驶图像进行全面采集,更加精确。The beneficial effects of the system of the present invention are: through the industrial computer processing center, the driver's driving behavior can be detected, recognized, and monitored in a completely natural, non-contact manner; combined with real-time video acquisition, real-time data can be obtained quickly and flexibly and real-time video frames analyzed, overcoming the non-real-time limitation of the prior art; fewer detection instruments are required, reducing hardware installation difficulty and saving cab space; and dual cameras are used to capture real-time video images, overcoming the shortcomings of single-camera acquisition and enabling comprehensive, more accurate capture of driving images.
本发明的方法的有益效果是:通过双摄像头采集的实时视频数据,可以快捷灵活地获取实时数据,并对实时视频帧进行分析,解决了现有技术非实时性的不足;采用了双摄像头来采集实时视频图像,克服了只采用单摄像头采集图像的不足,能对驾驶图像进行全面采集,更加精确;通过目标部位检测、目标部位特征提取和特征点比对,能够对驾驶员脸部表情、脸部特征、眼睛运动规律、眼睛状态、头部运动规律等进行实时识别,进一步提高了识别与监控的准确度。The beneficial effects of the method of the present invention are: real-time video data collected by the dual cameras allows real-time data to be obtained quickly and flexibly and real-time video frames to be analyzed, overcoming the non-real-time limitation of the prior art; the dual cameras overcome the shortcomings of single-camera acquisition and enable comprehensive, more accurate capture of driving images; and through target-part detection, target-part feature extraction, and feature-point comparison, the driver's facial expressions, facial features, eye movement patterns, eye state, head movement patterns, and so on can be recognized in real time, further improving the accuracy of recognition and monitoring.
附图说明 Brief Description of the Drawings
下面结合附图和实施例对本发明作进一步说明。The present invention will be further described below in conjunction with drawings and embodiments.
图1为本发明一种高精度的驾驶员行为识别与监控系统的功能模块框图;Fig. 1 is a functional block diagram of a high-precision driver behavior recognition and monitoring system of the present invention;
图2为本发明工控机处理中心的结构框图;Fig. 2 is the structural block diagram of industrial computer processing center of the present invention;
图3为本发明一种高精度的驾驶员行为识别与监控方法的步骤流程图;Fig. 3 is a flow chart of the steps of a high-precision driver behavior recognition and monitoring method of the present invention;
图4为本发明步骤B的流程图;Fig. 4 is the flowchart of step B of the present invention;
图5为本发明步骤C的流程图。Fig. 5 is a flowchart of step C of the present invention.
具体实施方式 Detailed Description of the Embodiments
参照图1,一种高精度的驾驶员行为识别与监控系统,包括:Referring to Figure 1, a high-precision driver behavior recognition and monitoring system includes:
双摄像头,用于采集实时视频数据帧;Dual cameras for collecting real-time video data frames;
视频解码器,用于对双摄像头采集的数据进行解码,从而获得实时视频图像数据帧;The video decoder is used to decode the data collected by the dual cameras to obtain real-time video image data frames;
硬盘刻录机,用于存储实时视频图像数据帧,并将实时视频图像数据帧上传给工控机处理中心;Hard disk recorder, used to store real-time video image data frames, and upload real-time video image data frames to the industrial computer processing center;
工控机处理中心,用于根据上传的实时视频图像数据帧进行特征检测、特征提取和实时状态分析,从而对驾驶员行为进行识别与监控;The industrial computer processing center is used to perform feature detection, feature extraction and real-time state analysis based on the uploaded real-time video image data frame, so as to identify and monitor the driver's behavior;
警报模块,用于对异常驾驶行为进行报警;Alarm module, used for alarming abnormal driving behavior;
所述双摄像头依次通过视频解码器、硬盘刻录机和工控机处理中心进而与警报模块的输入端连接。The dual cameras are connected with the input terminal of the alarm module through the video decoder, the hard disk recorder and the industrial computer processing center in turn.
参照图2,进一步作为优选的实施方式,所述工控机处理中心包括:Referring to Fig. 2, further as a preferred embodiment, the industrial computer processing center includes:
目标部位检测单元,用于从实时视频图像数据帧中检测出目标部位;The target part detection unit is used to detect the target part from the real-time video image data frame;
目标部位高精度定位单元,用于从候选区域中提取出目标部位特征;The target part high-precision positioning unit is used to extract the target part features from the candidate area;
驾驶状态识别与分析单元,用于将目标部位特征与标准特征进行比对,并根据比对的结果对驾驶员的驾驶行为及状态进行识别与分析;The driving state identification and analysis unit is used to compare the features of the target part with the standard features, and identify and analyze the driving behavior and state of the driver according to the comparison results;
所述目标部位检测单元的输入端与硬盘刻录机的输出端连接,所述目标部位检测单元的输出端通过目标部位高精度定位单元进而和驾驶状态识别与分析单元的输入端连接,所述驾驶状态识别与分析单元的输出端与警报模块的输入端连接。The input end of the target-part detection unit is connected with the output end of the hard disk recorder; the output end of the target-part detection unit is connected, through the target-part high-precision positioning unit, with the input end of the driving state recognition and analysis unit; and the output end of the driving state recognition and analysis unit is connected with the input end of the alarm module.
其中,目标部位包括人脸和头部等。Wherein, the target parts include human face and head.
参照图1,进一步作为优选的实施方式,还包括为高分辨率双摄像头供电的车载电源以及为硬盘刻录机和工控机处理中心供电的电源模块。Referring to FIG. 1 , as a further preferred embodiment, it also includes an on-board power supply for the high-resolution dual cameras and a power supply module for the hard disk recorder and the industrial computer processing center.
参照图3,一种高精度的驾驶员行为识别与监控方法,包括:Referring to Figure 3, a high-precision driver behavior recognition and monitoring method includes:
A、通过双摄像头获取实时视频帧;A. Obtain real-time video frames through dual cameras;
B、对获取的实时视频帧进行目标部位检测和目标部位特征提取,从而得到实时目标部位特征点;B. Perform target part detection and target part feature extraction on the acquired real-time video frame, so as to obtain real-time target part feature points;
C、将实时目标部位特征点与根据训练样本和目标模板得到的标准目标部位特征点进行比对,并根据比对的结果对驾驶员的驾驶行为进行识别与分析:C. Compare the real-time feature points of the target part with the standard target part feature points obtained according to the training samples and target templates, and identify and analyze the driver's driving behavior according to the comparison results:
D、将识别与分析的结果反馈给用户并进行实时显示。D. Feedback the identification and analysis results to the user and display them in real time.
进一步作为优选的实施方式,所述目标部位包括头部和人脸。As a further preferred embodiment, the target site includes a head and a human face.
参照图4,进一步作为优选的实施方式,所述步骤B,其包括:Referring to Fig. 4, further as a preferred embodiment, the step B includes:
B1、根据预设的筛选条件对获取的视频帧进行人脸检测,从而检测出人脸图像,所述预设的筛选条件为:B1. Perform face detection on the acquired video frame according to preset screening conditions, thereby detecting a human face image, and the preset screening conditions are:
其中,m为主函数,K为目标函数,τ为可视系数,为面积系数,为检测到的比例系数;Among them, m is the main function, K is the objective function, τ is the visible coefficient, is the area coefficient, is the detected scale factor;
B2、对人脸图像进行眼睛定位、归一化处理和特征点提取,从而提取出实时人脸特征点。B2. Perform eye positioning, normalization processing and feature point extraction on the face image, so as to extract real-time face feature points.
其中,本发明的人脸检测和头部检测过程完全一致,而人脸检测过程的作用更大。In the present invention, the face detection process and the head detection process are identical, with face detection playing the larger role.
归一化处理,包含了图像预处理,图像缩放以及有效人脸区域选取等操作。Normalization processing includes operations such as image preprocessing, image scaling, and effective face area selection.
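As a hedged sketch of the scaling and effective-region steps just listed (the fixed 64×64 output size and nearest-neighbor sampling are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def normalize_face(gray, box, out_size=(64, 64)):
    """Crop the effective face region (`box` = x, y, w, h) and rescale it
    to a fixed size with nearest-neighbor sampling. The 64x64 output size
    is an assumed value, not specified by the patent."""
    x, y, w, h = box
    face = gray[y:y + h, x:x + w]                 # effective face region
    oh, ow = out_size
    rows = (np.arange(oh) * h // oh).astype(int)  # nearest-neighbor row map
    cols = (np.arange(ow) * w // ow).astype(int)  # nearest-neighbor col map
    return face[rows][:, cols]

# Synthetic grayscale frame standing in for a decoded video frame.
gray = (np.arange(100 * 120) % 256).astype(np.uint8).reshape(100, 120)
norm = normalize_face(gray, box=(10, 20, 60, 60))
print(norm.shape)
```

Every downstream feature extractor then sees faces of identical size regardless of how close the driver sits to the camera.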
进一步作为优选的实施方式,所述步骤B2中对人脸图像进行眼睛定位这一步骤,其具体为:Further as a preferred embodiment, in the step B2, the step of eye positioning is performed on the face image, which is specifically:
根据预设的特征点计算公式对人脸图像进行眼睛定位,从而定位出眼睛区域,所述预设的特征点计算公式为:Perform eye positioning on the face image according to a preset feature point calculation formula, thereby locating the eye region, and the preset feature point calculation formula is:
其中,E为单位像素点,L为像素点x的区域面积,D、S、A、N分别为像素点x的四个角点,h为可能定位的目标点。Here, E is the unit pixel, L is the area of the region of pixel x, D, S, A, and N are the four corner points of pixel x, and h is a candidate target point.
进一步作为优选的实施方式,所述步骤B2中对人脸图像进行归一化处理这一步骤,其具体为:Further as a preferred embodiment, the step of normalizing the face image in the step B2 is specifically:
根据两眼睛的距离对眼睛区域进行归一化处理,从而得到有效人脸区域,所述有效人脸区域满足:The eye area is normalized according to the distance between the two eyes, so as to obtain the effective face area, and the effective face area satisfies:
其中,k为眼睛瞳孔的半径,d为两眼之间的距离,为像素系数,为光度系数,V和D为两眼睛之间的坐标点,L为主函数。Among them, k is the radius of the pupil of the eye, d is the distance between the two eyes, is the pixel coefficient, is the photometric coefficient, V and D are the coordinate points between the two eyes, and L is the main function.
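A minimal sketch of one common reading of this step, scale normalization by interocular distance; the 32-pixel reference distance is an assumed value, not from the patent:

```python
import numpy as np

def eye_distance_scale(left_eye, right_eye, target_d=32.0):
    """Return the factor by which to scale the face image so the distance d
    between the two pupil centers becomes a fixed reference (assumed 32 px).
    Scaling every face this way makes eye distance a common unit."""
    d = float(np.hypot(right_eye[0] - left_eye[0],
                       right_eye[1] - left_eye[1]))
    return target_d / d

s = eye_distance_scale((40, 50), (104, 50))  # pupils 64 px apart
print(s)
```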
进一步作为优选的实施方式,所述步骤B2中对人脸图像进行归一化处理这一步骤还包括对人脸图像进行图像增强的步骤,所述进行图像增强的步骤所采用的公式为:Further as a preferred embodiment, the step of performing normalization processing on the face image in the step B2 also includes the step of image enhancement on the face image, and the formula used in the step of image enhancement is:
其中,j为以人脸中央为半径到中心点d的取值范围,θ为嘴巴中心位置到检测点R的取值范围,D为原始图像像素,K为处理后的目标像素函数,χτ,j为处理区域中的像素点。Among them, j is the value range from the center of the face to the center point d, θ is the value range from the center of the mouth to the detection point R, D is the original image pixel, K is the target pixel function after processing, χ τ , j is the pixel in the processing area.
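The patent's own enhancement formula is not reproduced in this text, so the sketch below substitutes standard histogram equalization as one concrete example of an image-enhancement step:

```python
import numpy as np

def equalize(gray):
    """Histogram equalization: stretch a low-contrast grayscale image so
    its intensities cover the full 0-255 range. This is a stand-in for the
    patent's (unreproduced) enhancement formula."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

img = np.tile(np.arange(0, 128, dtype=np.uint8), (64, 1))  # low-contrast ramp
out = equalize(img)
print(out.min(), out.max())
```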
参照图5,进一步作为优选的实施方式,所述步骤C,其包括:Referring to Fig. 5, further as a preferred embodiment, the step C includes:
C1、将实时目标部位特征点与根据训练样本和目标模板得到的标准目标部位特征点进行比对,从而对视频目标图像进行识别,所述对视频目标图像进行识别所依据的公式为:C1, comparing the real-time target part feature points with the standard target part feature points obtained according to the training sample and the target template, so as to identify the video target image, the formula based on which the video target image is identified is:
其中,α为像素系数,K为定值常数,χτ均为视频目标图像的横坐标,υj为视频目标图像可跟踪系数的纵坐标,γ为灰度系数,i、j为数列系数;Among them, α is the pixel coefficient, K is a fixed value constant, χ τ are the abscissas of the video target image, υ j are the ordinates of the trackable coefficients of the video target image, γ is the gray scale coefficient, and i, j are the sequence coefficients;
C2、根据识别的结果对驾驶员的驾驶行为进行分析。C2. Analyze the driver's driving behavior according to the recognition result.
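A minimal sketch of step C's comparison, scoring real-time feature points against the template by mean point-to-point distance; the threshold value is an assumption for illustration, and the patent's actual scoring formula is not reproduced in this text:

```python
import numpy as np

def match_score(realtime_pts, template_pts):
    # Mean Euclidean distance between corresponding feature points.
    diff = np.asarray(realtime_pts, float) - np.asarray(template_pts, float)
    return float(np.mean(np.hypot(diff[:, 0], diff[:, 1])))

def classify_behavior(realtime_pts, template_pts, threshold=4.0):
    # Assumed threshold: small drift counts as normal driving posture.
    score = match_score(realtime_pts, template_pts)
    return "normal" if score <= threshold else "abnormal"

template = [(30, 40), (70, 40), (50, 60), (40, 80), (60, 80)]
drifted = [(x + 2, y + 1) for (x, y) in template]  # slight head movement
print(classify_behavior(drifted, template))
```

A large deviation of the same points, such as a head turned well away from the template pose, would exceed the threshold and trigger the alarm module.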
下面结合具体实施例对本发明作进一步详细说明。The present invention will be described in further detail below in conjunction with specific embodiments.
实施例一Embodiment one
参照图1,本发明的第一实施例:Referring to Fig. 1, the first embodiment of the present invention:
本发明采取双摄像头来进行视频采集,能够实现3D效果,且能够弥补单一摄像头的不足,起到全面和低误差的捕获效果。其中,双摄像头能够快速采集实时视频。本发明的驾驶员行为识别与监控系统主要由视频采集模块和工控机处理中心两部分组成,其中,视频采集模块包括双摄像头、视频解码器和硬盘刻录机,整个系统结构如图1所示。其中,两个摄像头的选型是统一的,其设备接口提供多种实时数据传输模式,每种模式采用不同大小的数据包,速度最高可达7.5Mbps,具有16个256k的动态内存,支持16位或8位YUV 4:2:2或4:1:1RGB的数据格式。该系统工作时,双摄像头采集实时视频数据帧,经处理后存放到硬盘刻录机中的视频存储空间,然后通过工控机处理中心的显示视频窗口进行显示,并根据目标要求对视频进行分析并跟踪,最后进行状态分析,并将分析结果反映出来。The present invention uses dual cameras for video capture, which can achieve a 3D effect, make up for the deficiencies of a single camera, and provide comprehensive, low-error capture; the dual cameras can also acquire real-time video quickly. The driver behavior recognition and monitoring system of the present invention consists mainly of two parts, a video acquisition module and an industrial computer processing center, where the video acquisition module includes the dual cameras, a video decoder, and a hard disk recorder; the overall system structure is shown in Figure 1. The two cameras are of the same model; their device interface provides multiple real-time data transmission modes, each using packets of a different size, with speeds of up to 7.5 Mbps, 16 blocks of 256 K dynamic memory, and support for 16-bit or 8-bit YUV 4:2:2 or 4:1:1 RGB data formats. When the system is working, the dual cameras collect real-time video data frames, which after processing are stored in the video storage space of the hard disk recorder; the frames are then displayed in the video window of the industrial computer processing center, the video is analyzed and tracked according to the target requirements, and finally state analysis is performed and the results are reported.
实施例二Embodiment two
本实施例对人脸图像进行预处理的过程进行说明。This embodiment describes the process of preprocessing a face image.
考虑到数据采集在现实生活的可行性,本发明为了保证人脸图像中人脸大小,位置以及人脸图像质量的一致性,在进行人脸特征提取前需要对图像进行数据预处理。图像数据预处理的主要目的是消除图像中无关的信息,滤除干扰、噪声,恢复有用的真实信息,增强有关信息的可检测性和最大限度地简化数据,从而改进特征抽取、图像分割、匹配和识别的可靠性。Considering the feasibility of data collection in real life, and to ensure consistency in face size, position, and image quality, the present invention preprocesses the images before face feature extraction. The main purposes of image data preprocessing are to eliminate irrelevant information, filter out interference and noise, restore useful real information, enhance the detectability of relevant information, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching, and recognition.
本发明人脸图像的预处理过程主要包括人脸扶正、人脸图像的增强以及归一化等过程。人脸扶正是为了得到人脸位置端正的人脸图像,图像增强是为了改善人脸图像的质量,不仅在视觉上更加清晰图像,而且使图像更利于计算机的处理与识别。其中,图像增强使用如下公式:The preprocessing of face images in the present invention mainly includes face straightening, face image enhancement, and normalization. Face straightening obtains a face image in an upright position; image enhancement improves the quality of the face image, not only making it visually clearer but also making it easier for the computer to process and recognize. Image enhancement uses the following formula:
而归一化工作的目标是取得尺寸一致,灰度取值范围相同的标准化人脸图像。The goal of normalization work is to obtain standardized face images with the same size and the same gray value range.
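The size and gray-range standardization described here can be sketched as a simple linear rescaling; the target range [0, 255] is the usual choice and an assumption, since the patent does not fix specific values:

```python
import numpy as np

def standardize_gray(gray, lo=0, hi=255):
    """Linearly rescale pixel intensities so every normalized face image
    shares the same gray-value range [lo, hi]. Paired with a fixed crop
    size, this yields size- and range-standardized face images."""
    g = gray.astype(float)
    g = (g - g.min()) / (g.max() - g.min())   # map to [0, 1]
    return (lo + g * (hi - lo)).astype(np.uint8)

patch = np.array([[50, 60], [70, 90]], dtype=np.uint8)  # narrow gray range
print(standardize_gray(patch).tolist())
```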
实施例三Embodiment Three
本实施例对本发明的训练样本和目标模板的生成过程进行介绍。This embodiment introduces the process of generating training samples and target templates of the present invention.
本发明的人脸检测与识别过程采用基于AdaBoost的分类器选择方法,给定一个特征集合和一个包含正样本和负样本图像的训练集。任何机器学习的方法都可以用于通过学习来训练本发明的分类函数。因此,本发明采用AdaBoost方法来进行样本训练和选择特征和训练分类器,从而生成用于提取标准特征点的训练样本和目标模板。The face detection and recognition process of the present invention adopts an AdaBoost-based classifier selection method, given a feature set and a training set containing positive and negative sample images. Any machine learning method could be used to train the classification function of the present invention. Accordingly, the present invention uses the AdaBoost method for sample training, feature selection, and classifier training, thereby generating the training samples and target templates used to extract the standard feature points.
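A from-scratch sketch of AdaBoost with single-feature threshold stumps on toy one-dimensional data, illustrating the training loop this paragraph describes; it is not the patent's actual implementation, which operates on image features rather than raw numbers:

```python
import numpy as np

def train_adaboost(X, y, rounds=5):
    """Toy AdaBoost: each round picks the threshold stump with the lowest
    weighted error, weights it, and reweights the samples it misclassified."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):              # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, f] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # stump weight
        pred = np.where(sign * (X[:, f] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)           # boost misclassified samples
        w /= w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(s * (X[:, f] - t) >= 0, 1, -1)
                for a, f, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

X = np.array([[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]])
y = np.array([-1, -1, -1, 1, 1, 1])              # negative / positive samples
model = train_adaboost(X, y)
print(predict(model, X).tolist())
```

In the face-detection setting, each column of `X` would be a feature response (for example a Haar-like feature value) computed over the positive and negative training images.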
与现有技术相比,本发明通过工控机处理中心,能够实现完全自然的非接触地对驾驶员的驾驶行为进行检测、识别与监控,再结合实时视频采集结果,可以快捷灵活地获取实时数据,对实时视频帧进行分析,解决了现有技术非实时性的不足;所需的检测仪器较少,降低了硬件设备的安装难度,节约驾驶室空间;采用了双摄像头来采集实时视频图像,克服了只采用单摄像头采集图像的不足,能对驾驶图像进行全面采集,更加精确;通过目标部位检测、目标部位特征提取和特征点比对,能够对驾驶员脸部表情、脸部特征、眼睛运动规律、眼睛状态、头部运动规律等进行实时识别,进一步提高了识别与监控的准确度。Compared with the prior art, the present invention, through the industrial computer processing center, can detect, recognize, and monitor the driver's driving behavior in a completely natural, non-contact manner; combined with real-time video acquisition, real-time data can be obtained quickly and flexibly and real-time video frames analyzed, overcoming the non-real-time limitation of the prior art; fewer detection instruments are required, reducing hardware installation difficulty and saving cab space; dual cameras overcome the shortcomings of single-camera acquisition and enable comprehensive, more accurate capture of driving images; and through target-part detection, target-part feature extraction, and feature-point comparison, the driver's facial expressions, facial features, eye movement patterns, eye state, head movement patterns, and so on can be recognized in real time, further improving the accuracy of recognition and monitoring.
以上是对本发明的较佳实施进行了具体说明,但本发明创造并不限于所述实施例,熟悉本领域的技术人员在不违背本发明精神的前提下还可作出种种的等同变形或替换,这些等同的变形或替换均包含在本申请权利要求所限定的范围内。The preferred embodiments of the present invention have been described in detail above, but the invention is not limited to these embodiments; those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and such equivalent modifications or substitutions fall within the scope defined by the claims of the present application.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410284311.5A CN104077568A (en) | 2014-06-23 | 2014-06-23 | High-accuracy driver behavior recognition and monitoring method and system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN104077568A true CN104077568A (en) | 2014-10-01 |
Family
ID=51598816
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104077568A (en) |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105469035A (en) * | 2015-11-17 | 2016-04-06 | 中国科学院重庆绿色智能技术研究院 | Driver's bad driving behavior detection system based on binocular video analysis |
| CN106200911A (en) * | 2016-06-30 | 2016-12-07 | 成都西可科技有限公司 | A kind of motion sensing control method based on dual camera, mobile terminal and system |
| CN107531249A (en) * | 2015-02-19 | 2018-01-02 | 雷诺股份公司 | Method and apparatus for detecting the change of motor vehicles driving behavior |
| CN108229345A (en) * | 2017-12-15 | 2018-06-29 | 吉利汽车研究院(宁波)有限公司 | A kind of driver's detecting system |
| CN108423004A (en) * | 2018-05-16 | 2018-08-21 | 浙江吉利控股集团有限公司 | A kind of binocular identification driver status detecting system and method |
| CN109145734A (en) * | 2018-07-17 | 2019-01-04 | 深圳市巨龙创视科技有限公司 | Algorithm is captured in IPC Intelligent human-face identification based on 4K platform |
| CN111047874A (en) * | 2019-12-19 | 2020-04-21 | 中科寒武纪科技股份有限公司 | Intelligent traffic violation management method and related product |
| CN113507642A (en) * | 2021-09-10 | 2021-10-15 | 江西省天轴通讯有限公司 | Video segmentation method, system, storage medium and device |
| CN117519488A (en) * | 2024-01-05 | 2024-02-06 | 四川中电启明星信息技术有限公司 | A dialogue method and dialogue system for dialogue robots |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101599207A (en) * | 2009-05-06 | 2009-12-09 | Shenzhen Hanhua Andao Technology Co., Ltd. | Fatigue driving detection device and automobile |
| CN101872419A (en) * | 2010-06-09 | 2010-10-27 | Tan Taizhe | Method for detecting fatigue of automobile driver |
| CN102860830A (en) * | 2012-09-12 | 2013-01-09 | Shanghai University | Field programmable gate array (FPGA)-based fatigue driving binocular detection hardware platform |
| US20130162794A1 (en) * | 2011-12-26 | 2013-06-27 | Denso Corporation | Driver monitoring apparatus |
2014-06-23: Application CN201410284311.5A filed in China; published as CN104077568A; status: Pending
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107531249A (en) * | 2015-02-19 | 2018-01-02 | Renault S.A.S. | Method and apparatus for detecting a change in motor vehicle driving behaviour |
| CN107531249B (en) * | 2015-02-19 | 2020-03-27 | Renault S.A.S. | Method and device for detecting a change in the behaviour of a driver of a motor vehicle |
| CN105469035A (en) * | 2015-11-17 | 2016-04-06 | Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences | Driver's bad driving behavior detection system based on binocular video analysis |
| CN106200911A (en) * | 2016-06-30 | 2016-12-07 | Chengdu Xike Technology Co., Ltd. | Dual-camera-based motion sensing control method, mobile terminal and system |
| CN108229345A (en) * | 2017-12-15 | 2018-06-29 | Geely Automobile Research Institute (Ningbo) Co., Ltd. | Driver detection system |
| CN108423004A (en) * | 2018-05-16 | 2018-08-21 | Zhejiang Geely Holding Group Co., Ltd. | Binocular driver status detection system and method |
| CN109145734A (en) * | 2018-07-17 | 2019-01-04 | Shenzhen Julong Chuangshi Technology Co., Ltd. | IPC intelligent face recognition and capture algorithm based on a 4K platform |
| CN111047874A (en) * | 2019-12-19 | 2020-04-21 | Cambricon Technologies Corporation Limited | Intelligent traffic violation management method and related product |
| CN113507642A (en) * | 2021-09-10 | 2021-10-15 | Jiangxi Tianzhou Communication Co., Ltd. | Video segmentation method, system, storage medium and device |
| CN117519488A (en) * | 2024-01-05 | 2024-02-06 | Sichuan Aostar Information Technology Co., Ltd. | Dialogue method and dialogue system for dialogue robots |
| CN117519488B (en) * | 2024-01-05 | 2024-03-29 | Sichuan Aostar Information Technology Co., Ltd. | Dialogue method and dialogue system for dialogue robots |
Similar Documents
| Publication | Title |
|---|---|
| CN104077568A (en) | High-accuracy driver behavior recognition and monitoring method and system |
| CN110826538B (en) | Abnormal leaving-post recognition system for electric power business halls |
| CN109670441B (en) | Method, system, terminal and computer-readable storage medium for safety helmet wearing recognition |
| CN108216252B (en) | Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system |
| CN112396658B (en) | Video-based indoor personnel positioning method and system |
| US9355306B2 (en) | Method and system for recognition of abnormal behavior |
| WO2021047232A1 (en) | Interaction behavior recognition method, apparatus, computer device, and storage medium |
| CN110096945B (en) | Machine-learning-based real-time key frame extraction method for indoor surveillance video |
| CN108038424B (en) | Automatic visual detection method suitable for aerial work |
| CN103340637A (en) | System and method for intelligent driver alertness monitoring based on fusion of eye movement and brain waves |
| CN103517042A (en) | Method for monitoring dangerous behavior of elderly residents in nursing homes |
| CN109543542A (en) | Method for determining whether personnel in specific places are dressed in conformance with standards |
| Thaman et al. | Face mask detection using mediapipe facemesh |
| CN110458198A (en) | Multi-resolution target recognition method and device |
| WO2021068781A1 (en) | Fatigue state identification method, apparatus and device |
| CN115937928A (en) | Learning status monitoring method and system based on multi-visual feature fusion |
| CN114973135A (en) | Head-and-shoulders-based time-sequence video sleeping-on-duty recognition method, system and electronic device |
| CN114639168B (en) | Method and system for recognizing running posture |
| CN118314556A (en) | Fatigue driving detection method, system, computer equipment and storage medium |
| CN116894978B (en) | Online exam anti-cheating system integrating facial emotion and behavioral features |
| CN111667599A (en) | Face recognition clock-in system and method |
| KR20200005853A (en) | Method and system for people counting based on deep learning |
| GB2634152A (en) | Method for assessing driver drowsiness based on view angle correction and improved ViViT |
| CN116343343B (en) | Intelligent evaluation method for crane lifting command actions based on cloud-end architecture |
| CN117522186B (en) | Railway construction hidden engineering acceptance system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20141001 |