CN111824406A - A public safety autonomous inspection quadrotor UAV based on machine vision - Google Patents
A public safety autonomous inspection quadrotor UAV based on machine vision
- Publication number
- CN111824406A (application No. CN202010691797.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- image
- data
- processing module
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C27/00—Rotorcraft; Rotors peculiar thereto
- B64C27/04—Helicopters
- B64C27/08—Helicopters with two or more rotors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U10/00—Type of UAV
- B64U10/10—Rotorcrafts
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J5/00—Radiation pyrometry, e.g. infrared or optical thermometry
- G01J5/0022—Radiation pyrometry, e.g. infrared or optical thermometry for sensing the radiation of moving bodies
- G01J5/0025—Living bodies
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/12—Target-seeking control
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2201/00—UAVs characterised by their flight controls
- B64U2201/20—Remote controls
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J5/00—Radiation pyrometry, e.g. infrared or optical thermometry
- G01J2005/0077—Imaging
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Aviation & Aerospace Engineering (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mechanical Engineering (AREA)
- Remote Sensing (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Molecular Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Radar, Positioning & Navigation (AREA)
- Automation & Control Theory (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a machine-vision-based quadrotor unmanned aerial vehicle (UAV) for autonomous public safety patrol, belonging to the field of aircraft. It consists of a power system, a flight control system, a data processing system, and a mobile terminal system. In operation, a target model uploaded from the mobile terminal system to the data processing system analyzes the real-time image data obtained by the image acquisition module to determine the target classes in the image. When a human is recognized, the image data are used to compute the person density and judge whether the scene is a suspicious or dangerous crowd; the UAV then approaches each person in turn to scan facial data, obtains body temperature data through an infrared thermal imager to judge whether the body temperature is normal, and at the same time compares the external facial features against pictures in a criminal face database to determine whether the person is a fugitive.
Description
Technical Field
The invention belongs to the field of aircraft, and in particular relates to a machine-vision-based quadrotor UAV for autonomous public safety patrol.
Background Art
UAV technology is developing rapidly. Small and medium-sized UAVs, thanks to their portable size, are used in many scenarios such as search and rescue, aerial photography, natural resource surveying, field hunting, and target monitoring and reconnaissance. Small quadrotor UAVs in particular, being easy to control and agile in the air, can gather rich environmental information through vision sensors and therefore have far more application scenarios than other aircraft. Machine-vision applications of quadrotor UAVs include target tracking, traffic management, and aerial navigation, but at present most of these functions still depend on an operator and the degree of automation is very limited; UAV applications that use image recognition from artificial intelligence to recognize human behavior are rarer still. Some target recognition approaches process the image information at a ground station; although this method can handle complex models, its time delay is much higher than processing the image information on an onboard image processing board.
Some patent documents have proposed ways of applying UAVs to the security field, such as "A multi-rotor UAV dynamic security system" (publication No. CN207078318U) and "A UAV-based security system" (publication No. CN110766907A). In these, a UAV carrying an FPV imaging system and several functional modules collects data on the monitored environment and sends the data to a ground station, where an operator judges whether the situation is safe. Both patents transmit all the acquired information to the ground station over a data link, and the collected images are processed entirely by humans. This approach is time-consuming and labor-intensive and does not reflect the autonomy and intelligence of the UAV.
Other patent documents have proposed using face recognition to identify criminals and map their whereabouts, such as "A criminal whereabouts mapping system based on face recognition technology and its method" (grant publication No. CN103699677B). That system compares face images collected by fixed cameras installed along roads and streets with criminal face data provided by the police in order to find the criminals' traces. Collecting face data from a huge number of fixed cameras is in itself an operation involving an enormous amount of information, and the time from collecting the face data to finishing the image processing and locating a criminal is long. Compared with first detecting a human target and then actively acquiring that target's face data, the collection method in that patent is inefficient, highly passive, and highly random. Moreover, the area covered by fixed cameras is limited, so they have little effect on criminals with strong counter-surveillance awareness. In summary, the criminal tracing method adopted by that patent is inefficient, unreliable, poorly adaptable to the environment, and of limited practicality.
Summary of the Invention
The purpose of the present invention is to provide a machine-vision-based quadrotor UAV for autonomous public safety patrol. The UAV can, around the clock and in real time, automatically search for suspicious persons or fugitive criminals and track them automatically, automatically perform body temperature safety checks on human targets, and autonomously detect clusters of suspicious persons. None of these functions requires human intervention; the UAV completes them autonomously by itself.
The technical scheme of the present invention is as follows: a machine-vision-based quadrotor UAV for autonomous public safety patrol, composed of a power system, a flight control system, a data processing system, and a mobile terminal system, characterized in that:
the power system provides flight power for the UAV and supplies electrical power to the flight control system and the data processing system;
the flight control system controls the flight of the quadrotor UAV and provides a stable platform for acquiring target images;
the data processing system acquires and processes image data, establishes communication with the mobile terminal system, and sends motion control commands to the flight control system; after acquiring and processing image data from a high-definition camera it can autonomously identify clusters of suspicious persons and automatically search for suspicious targets or criminals, command the flight control system to track a target automatically once one is found, and automatically measure the forehead temperature of a human target with an infrared thermal imager;
the mobile terminal system handles human-machine interaction and information transmission with the data processing system.
Further, the airframe of the quadrotor UAV consists, from top to bottom, of a frame, a frame base, and a landing gear. The frame base comprises two quadrilateral plates; four rotating shafts pass through the four corners of the two plates, with the first quadrilateral plate suspended on the four shafts. The frame, an X-shaped frame, is mounted on the tops of the four shafts; the bottoms of the shafts connect to the second quadrilateral plate, whose underside connects to the landing gear. The power system and the flight control system are arranged on the upper side of the first quadrilateral plate of the frame base, and the data processing system is arranged on the underside of the first quadrilateral plate.
Further, the power system consists of brushless motors, propeller blades, a high-discharge-rate model-aircraft lithium battery, and a power supply voltage-regulator board. One brushless motor is mounted at each of the four ends of the frame, with a propeller blade fixed on each motor. The lithium battery is arranged on the upper side of the first quadrilateral plate of the frame base, and the voltage-regulator board is mounted vertically at the side of the first and second quadrilateral plates, level with the first plate.
Further, the flight control system consists of an embedded flight microcontroller, an IMU integrated sensor, a monocular ultrasonic sensor, an optical flow sensor, and a GPS antenna. The embedded flight microcontroller is suspended in the upper part of the frame and carries the IMU integrated sensor; the monocular ultrasonic sensor and the optical flow sensor are mounted side by side at the lower end of the lithium battery and the side of the voltage-regulator board; the GPS antenna is suspended at the very top on an extension of one of the rotating shafts. The IMU integrated sensor integrates a six-axis motion sensor, a magnetometer, and a barometer.
Further, the data processing system consists of an image processing module, an image acquisition device, and a wireless transceiver, and is used to acquire and process image data, establish communication with the mobile terminal system, and send motion control commands to the flight control system. The image acquisition device comprises a three-axis self-stabilizing gimbal, a high-definition camera, an infrared fill light, and an infrared thermal imager, and is used to collect image data. The image processing module is arranged on the upper side of the second quadrilateral plate, below the first; a bracket on the underside of the second plate holds the gimbal, the camera, and the thermal imager. The image processing module processes the image data with deep-learning image processing algorithms (Lu Jian, He Jinxin, Li Zhe, et al. A survey of deep-learning-based object detection [J]. Electronics Optics & Control, 2020, 27(5): 56-63. DOI: 10.3969/j.issn.1671-637X.2020.05.012), communicates with the mobile terminal system through the wireless transceiver, and sends motion control commands to the flight control system over a serial port. After acquiring and processing image data from the image acquisition device, the data processing system can autonomously identify clusters of suspicious persons and automatically search for suspicious targets or criminals; once a target is found it can command the flight control system to track it automatically, and it can also automatically measure the forehead temperature of a human target with the infrared thermal imager.
Further, the mobile terminal system consists of an intelligent terminal and a wireless transceiver, and handles human-machine interaction and information transmission with the data processing system. The human-machine interaction includes obtaining, through input devices, the deep-learning target image model, the collected target face image data, and the operator's control commands for the UAV, and displaying, through output devices, the image data, processed data, and UAV status data coming from the data processing system, alerting the operator by vibration and prompt tones. The deep-learning target image model is a convolutional neural network model containing the deep features of the target images, obtained by applying a deep-learning image processing algorithm to them (Zhou Junyu, Zhao Yanming. A survey of convolutional neural networks in image classification and object detection [J]. Computer Engineering and Applications, 2017, 53(13): 34-41), and can be used for target detection and recognition. Information transmission with the data processing system includes sending the information obtained through the input devices to the data processing system via the wireless transceiver, and receiving from the data processing system the image data, the processed data, and the UAV status data.
Further, the UAV can autonomously recognize and track targets offline through its image processing module.
Further, this autonomous offline recognition and tracking through the image processing module proceeds specifically as follows:
(1) Training of the deep-learning target image model: a convolutional neural network built with a deep-learning image processing algorithm performs deep learning on a training set containing target images, to obtain a deep-learning target image model containing the deep features of the target images, i.e. a network model encoding those deep features.
(2) Target recognition: the image processing module loads the trained deep-learning target image model while the image acquisition device captures video data in real time and transmits it to the image processing module, which converts the video into per-frame pictures forming a picture data set. The image processing module then applies noise filtering and color format conversion to obtain a picture data set containing the target features; after processing by the deep-learning target image model, each picture yields the confidence that it contains the target, together with the pixel height, width, and pixel coordinates of the target image within the original image.
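The per-picture output described in step (2) — a confidence plus the target's pixel height, width, and coordinates — can be modeled as a small record type, with a confidence filter as the final stage of recognition. A minimal sketch; the `Detection` type and the 0.5 threshold are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # predicted class, e.g. "person"
    confidence: float # model confidence in [0, 1]
    x: int            # pixel column of the box center
    y: int            # pixel row of the box center
    width: int        # box width in pixels
    height: int       # box height in pixels

def filter_detections(raw, min_confidence=0.5):
    """Keep only detections the model is reasonably sure about."""
    return [d for d in raw if d.confidence >= min_confidence]

# Example: two candidate boxes from one frame; only the confident one survives.
frame_dets = [
    Detection("person", 0.91, x=320, y=240, width=60, height=140),
    Detection("person", 0.22, x=100, y=200, width=40, height=90),
]
people = filter_detections(frame_dets)
```

Downstream steps (cluster judgment, tracking) then work on the surviving `Detection` records only.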
(3) Target tracking: the tracking algorithm uses template matching. The feature template is a target image with high confidence from the recognition step; the matching method is squared-difference matching or correlation-coefficient matching, i.e. the correlation between the feature template and the searched image is measured by the squared difference or the correlation coefficient between their image data matrices.
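The squared-difference matching named in step (3) can be written directly from its definition: slide the template over the search image and take the location minimizing the sum of squared differences between the two data matrices. Production code would use an optimized routine such as OpenCV's `matchTemplate`; this brute-force numpy sketch is for illustration only:

```python
import numpy as np

def match_template_sqdiff(image, template):
    """Slide `template` over `image` and return the top-left (row, col)
    minimizing the sum of squared differences, i.e. the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            ssd = np.sum((patch - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Embed the template in a larger image and recover its position.
rng = np.random.default_rng(0)
template = rng.integers(0, 255, (8, 8)).astype(float)
image = rng.integers(0, 255, (32, 32)).astype(float)
image[10:18, 5:13] = template   # ground-truth location: row 10, col 5
```

The correlation-coefficient variant is the same loop with the normalized cross-correlation score maximized instead of the squared difference minimized.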
Further, the specific procedure of the template matching method used in target tracking is:
After the image processing module obtains the target feature template, it searches the first frame captured by the image acquisition device for the image most similar to the template, and outputs that image's center pixel coordinates and pixel size to the flight control system. The flight control system uses the pixel coordinates to steer the aircraft or the three-axis self-stabilizing gimbal so that the center of the target image stays in the middle of the frame, and uses the relative pixel size of the target image to keep a suitable distance between the aircraft and the target. The image processing module uses a Kalman filter (Welch G. Kalman Filter [J]. Siggraph Tutorial, 2001) or a particle filter (Gool, Luc. Object Tracking with an Adaptive Color-Based Particle Filter [C]. 2002: 353-360) to predict the target's position in the next frame.
While predicting the next frame's target position, the image processing module updates the feature template to the image found in the current frame so as to search for the target in the next frame. If no target image is found in a frame, it keeps searching subsequent frames with the previous template; when no image similar to the template has been found within a set number of frames, it re-runs target recognition on the current frame to obtain a new target feature template.
To deal with tracking drift that accumulates over time, i.e. an offset between the tracked target image position and the actual target position, the image processing module automatically re-detects the target after running the tracking algorithm for a period of time.
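The prediction step above can be instantiated, for example, as a constant-velocity Kalman filter over the target's pixel coordinates: state `[x, y, vx, vy]`, measurement `[x, y]` from the template match. This is one possible sketch of the idea, not the patent's exact filter; the noise covariances and step sizes are illustrative assumptions:

```python
import numpy as np

class PixelKalman:
    """Constant-velocity Kalman filter over (x, y) pixel coordinates."""
    def __init__(self, x, y, dt=1.0):
        self.s = np.array([x, y, 0.0, 0.0])   # state [x, y, vx, vy]
        self.P = np.eye(4) * 10.0             # state covariance
        self.F = np.array([[1, 0, dt, 0],     # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)
        self.H = np.array([[1, 0, 0, 0],      # we only measure position
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01             # process noise (assumed)
        self.R = np.eye(2) * 1.0              # measurement noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                     # predicted (x, y) next frame

    def update(self, x, y):
        z = np.array([x, y])
        residual = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ residual
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Track a target moving 3 px right and 2 px down per frame.
kf = PixelKalman(0, 0)
for t in range(1, 20):
    kf.predict()
    kf.update(3 * t, 2 * t)
pred = kf.predict()   # estimate for t = 20, near (60, 40)
```

A frame counter alongside the filter implements the drift fix: after a fixed number of tracked frames (or a run of misses), discard the template and re-run full detection.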
The functions of automatically finding and tracking suspicious persons or criminals, automatically performing body temperature safety checks on human targets, and autonomously detecting clusters of suspicious persons are implemented as follows:
(1) Optionally, a high-performance computer runs the deep-learning image processing algorithm to learn from a large number of human target images and obtain a deep-learning target image model of human targets; the model, together with the face data of targets such as suspicious persons or criminals, is then uploaded from the mobile terminal system to the UAV's data processing system.
(2) While the UAV is working, the image acquisition device captures images which the image processing module processes to count the human targets in each image; if the number of human targets is too large, the scene is judged to be a cluster of suspicious persons.
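Step (2)'s cluster judgment reduces to a threshold on the per-frame person count. A minimal sketch; the threshold value is an illustrative assumption, since the patent does not specify one:

```python
def is_suspicious_cluster(num_people, threshold=10):
    """Flag the scene as a suspicious-person cluster when the count of
    human targets detected in one frame exceeds the configured threshold.
    The default of 10 is an assumed value, not taken from the patent."""
    return num_people > threshold

# A crowded frame triggers the cluster flag; a sparse one does not.
crowded = is_suspicious_cluster(15)
sparse = is_suspicious_cluster(3)
```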
(3) Once a human target is recognized, the UAV actively moves to a position suitable for capturing face images, obtains the person's forehead temperature with the infrared thermal imager, and collects face image data with the high-definition camera. If the forehead temperature exceeds a set threshold, an alarm is raised and the collected face image data together with the UAV's latitude-longitude coordinates are sent to the mobile terminal system. While judging whether the body temperature is normal, the collected image data are compared against the uploaded face data of suspicious persons or criminals for face recognition. If the person is identified as a suspicious person or criminal, the UAV sends the collected image data and its current latitude-longitude coordinates to the mobile terminal system and begins tracking automatically; otherwise, the same identification is performed on the next person.
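Step (3) can be read as a small decision procedure over two inputs: the measured forehead temperature and a face-similarity score against the uploaded database. A minimal sketch; the 37.3 °C limit and the 0.8 similarity threshold are illustrative assumptions (the patent only says "a set threshold"):

```python
def inspect_person(forehead_temp_c, face_similarity,
                   temp_limit=37.3, match_threshold=0.8):
    """Step (3) as a decision table: alarm on abnormal temperature,
    start tracking on a face-database match, otherwise move on to the
    next person. Threshold values are assumed, not from the patent."""
    actions = []
    if forehead_temp_c > temp_limit:
        actions.append("alarm_and_report_temperature")
    if face_similarity >= match_threshold:
        actions.append("report_position_and_track")
    else:
        actions.append("next_person")
    return actions

# A feverish match triggers both the alarm and tracking.
hot_match = inspect_person(38.0, 0.9)
# A normal-temperature non-match just advances to the next person.
clear = inspect_person(36.5, 0.1)
```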
The beneficial effects of the present invention are: the technology works offline, so the large and complex image data need not be transmitted to a ground station for processing; instead, the UAV's own image processing module recognizes the target features in the images. This avoids the data loss and high latency of wireless transmission and lets the UAV itself perform the required functions in real time, improving the sensitivity, response speed, and scene adaptability of the UAV system. The image acquisition device uses both a high-definition camera and an infrared thermal imager, with an infrared fill light switched on at night, so the UAV can work in both bright and dark environments through the camera while also obtaining target temperatures through the thermal imager, greatly reducing the environment's effect on its operation. The UAV uses the mobile terminal system to upload the deep-learning target image model and the face image data of suspicious persons or criminals; this is simple to operate, applicable to many scenarios, and highly practical. Once the UAV holds several deep-learning target image models, it searches for targets automatically, improving its automation and intelligence. During target face recognition, the UAV actively moves to a suitable position to capture face image data, which increases the probability of finding dangerous targets such as criminals.
Brief Description of the Drawings
Fig. 1 is a structural diagram of the UAV system of the present invention;
Fig. 2 is a front view of the UAV structure of the present invention;
Fig. 3 is a bottom view of the UAV structure of the present invention;
Fig. 4 is a left view of the UAV structure of the present invention;
Fig. 5 is a right view of the UAV structure of the present invention;
Fig. 6 is a flow chart of the UAV safety patrol of the present invention;
Fig. 7 is a schematic diagram of the motion control of the present invention.
Description of reference numerals: in Figs. 2, 3, 4 and 5 the front view faces the side of the fuselage towards which the camera points. 1—power system; 2—flight control system; 3—data processing system; 4—mobile terminal system; 5—frame; 6—frame base; 7—landing gear; 8—rotating shaft; 9—first quadrilateral plate; 10—second quadrilateral plate; 101—brushless motor; 102—propeller blade; 103—high-discharge-rate model-aircraft lithium battery; 104—power supply voltage-regulator board; 201—embedded flight microcontroller; 202—IMU integrated sensor composed of a six-axis motion sensor, a magnetometer and a barometer; 203—monocular ultrasonic sensor; 204—optical flow sensor; 205—GPS antenna; 301—image processing module; 302—three-axis self-stabilizing gimbal; 303—high-definition camera; 304—infrared thermal imager.
Detailed Description of Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and examples. As shown in Figs. 1-7, the machine-vision-based quadrotor UAV for autonomous public safety patrol designed by the present invention comprises a power system 1, a flight control system 2, a data processing system 3, and a mobile terminal system 4. The airframe of the quadrotor UAV consists, from top to bottom, of a frame 5, a frame base 6, and a landing gear 7. The frame base 6 comprises two quadrilateral plates; four rotating shafts 8 pass through the four corners of the two plates, with the first quadrilateral plate 9 suspended on the four shafts 8. The frame 5, an X-shaped frame, is mounted on the tops of the four shafts 8; the bottoms of the shafts connect to the second quadrilateral plate 10, whose underside connects to the landing gear 7. The power system 1 and the flight control system 2 are arranged on the upper side of the first quadrilateral plate 9 of the frame base 6, and the data processing system 3 is arranged on its underside.
The power system 1 consists of brushless motors 101, propeller blades 102, a high-discharge-rate model-aircraft lithium battery 103, and a power voltage-regulator board 104. One brushless motor 101 is mounted at each of the four ends of the frame 5, and a propeller blade 102 is fixed to each motor. The battery 103 sits on the upper side of the first quadrilateral plate 9 of the frame base 6; the power voltage-regulator board 104 is mounted vertically at the side of the first quadrilateral plate 9 and the second quadrilateral plate 10, level in height with the first quadrilateral plate 9.
The power system 1 provides flight power for the UAV and supplies electrical power to the flight control system 2 and the data processing system 3. It includes a power unit, the high-discharge-rate model-aircraft lithium battery 103, the propeller blades 102, and the power voltage-regulator board 104. The power unit may be an electric drive system in which the battery 103 drives the brushless motors 101 through electronic speed controllers, or a direct-drive fuel system composed of a fuel tank, an engine, a fuel feeder, a variable-pitch mechanism, and servos. The power voltage-regulator board 104 regulates the battery voltage down to a stable 5 V DC to power the flight control system 2 and the data processing system 3.
The flight control system 2 consists of an embedded flight microcontroller 201, an IMU integrated sensor 202, a monocular ultrasonic sensor 203, an optical flow sensor 204, a GPS antenna 205, and an acoustic or optical prompt device. The embedded flight microcontroller 201 is suspended on the upper part of the frame 5, with the IMU integrated sensor 202 mounted on it. The monocular ultrasonic sensor 203 and the optical flow sensor 204 are mounted side by side below the battery 103 and beside the power voltage-regulator board 104. The GPS antenna 205 is suspended at the very top, on a lengthened extension of one of the rotating shafts 8. The IMU integrated sensor 202 integrates a six-axis motion sensor, a magnetometer, and a barometer.
The flight control system 2 controls the flight of the quadrotor UAV and provides a stable platform for acquiring target images. It includes the embedded flight microcontroller 201, an altitude measurement device, an attitude sensor, a data transceiver, a horizontal displacement sensor, a navigation system, and an acoustic or optical prompt device. The flight microcontroller may be an FPGA-based (Field-Programmable Gate Array) platform, an ARM-based platform, an Atmel-based platform, or a Raspberry Pi-based platform. The altitude measurement device measures the UAV's height and may be an ultrasonic distance sensor, a barometer, a laser rangefinder, or a GPS device, or a device that combines several of these sources. The attitude sensor may be a nine-axis attitude sensor composed of a six-axis accelerometer and a three-axis gyroscope. The data transceiver receives UAV motion control commands and may be a WiFi or Bluetooth radio, a data-link radio, or a mobile communication device. The horizontal displacement sensor acquires horizontal displacement data to keep the UAV stable in the horizontal plane; an optical flow sensor may be used. The navigation system obtains the UAV's longitude, latitude, and altitude, using GPS. The acoustic or optical prompt device emits audible or visual alerts; it may be an optical output device, an acoustic output device, or a combination of the two. Here, a speaker combined with lights is used as the output device.
The data processing system 3 consists of an image processing module 301, a three-axis self-stabilizing gimbal 302, an HD camera 303, and an infrared thermal imager 304. The image processing module 301 is mounted on the upper side of the second quadrilateral plate 10, below the first quadrilateral plate 9. A bracket on the underside of the second quadrilateral plate 10 carries the three-axis self-stabilizing gimbal 302, the HD camera 303, and the infrared thermal imager 304.
The data processing system 3 acquires and processes image data, communicates with the mobile terminal system 4, and sends motion control commands to the flight control system 2. It includes the image processing module 301, an image acquisition device, and a wireless transceiver; the wireless transceiver may use any of the data transceiver options described for the flight control system. The image processing module 301 processes image data and communicates with the mobile terminal system 4; it may be an NVIDIA Jetson AI computer, a Baidu PaddlePaddle EdgeBoard deep learning card, an Arduino open-source hardware board, a Huawei HiSilicon AI platform, or a DeePhi Tech DP-8000 AI development board. The image acquisition device comprises the self-stabilizing gimbal, the HD camera, an infrared fill light, and the infrared thermal imager. The gimbal provides a stable platform for the camera and the thermal imager and may be a multi-axis gimbal; the camera may be monocular or multi-camera. The infrared fill light is switched on at night, so that image data can be captured through the HD camera whether the environment is bright or dark, enabling round-the-clock operation of the UAV. The infrared thermal imager captures target images carrying temperature data.
The mobile terminal system 4 consists of a smart terminal and a wireless transceiver and handles human-machine interaction and information exchange with the data processing system 3. The wireless transceiver both receives data sent by the onboard data processing system 3 and transmits the operator's command data; it may be a WiFi radio, a data-link radio, or a mobile communication device. The smart terminal may be a computer or smartphone with input and output devices and an installed application, which provides the operator interface. In operation, after the mobile terminal system 4 receives data from the data processing system 3 through the wireless transceiver, the application displays the data to the operator and alerts the operator by vibrating and playing a prompt tone. The operator's decision on whether to continue tracking is entered through the application, and the smart terminal sends the command data back to the onboard data processing system 3 through the wireless transceiver.
The workflow of the present invention is as follows:
(1) The operator inputs the target face image data and the deep learning target image model through the mobile terminal device and defines the flight mission, including planning the flight route, setting targets, and setting task contents and priorities. The task contents include identifying and tracking fugitive criminals, identifying clusters of suspicious persons, and identifying and tracking persons with abnormal body temperature. Once the data have been entered and the settings completed, the mobile terminal system 4 sends the mission information to the data processing system 3.
(2) After receiving the mission information and the start command, the data processing system 3 runs a system self-check. If the self-check fails, the data processing system 3 reports the error to the mobile terminal system 4; if it completes without error, the UAV starts automatically and begins flying along the configured route.
(3) After the UAV starts, the data processing system 3 sends the route data to the flight control system 2, which controls the UAV to fly along the configured route.
(4) During flight, the data processing system 3 collects image data of the surrounding environment, identifies human targets in the images with a deep learning image processing algorithm, computes the person density, and judges whether a suspicious cluster of people exists. The detection result is sent to the mobile terminal system 4, where the operator confirms whether the cluster is suspicious. If the operator is offline, the UAV directly tags the cluster target with its GPS position (i.e. the UAV's current longitude and latitude), saves the cluster's image data, sends them to the mobile smart terminal, and then resumes flying along the configured route.
(5) If the fugitive identification/tracking or abnormal-body-temperature identification/tracking tasks are enabled, then after recognizing a human target the data processing system 3 also computes the best path to a position suitable for capturing face data and sends the corresponding flight commands to the flight control system 2. Once at that position, it compares the captured face data with the target face data uploaded by the mobile terminal system 4. If the target is confirmed as a fugitive, the data processing system 3 sends the image processing result to the mobile terminal system 4 to alert the operator; if the operator is offline, the UAV automatically tracks the target until the operator issues a cancel command. While scanning the target's face, the infrared thermal imager also captures infrared face images and measures the forehead temperature. If the temperature is abnormal, the data processing system 3 sends the target image data, the processed image data, and the forehead temperature to the mobile terminal system 4. Whether to track an abnormal-temperature target can be configured in the task settings or set in real time on the mobile terminal system 4.
(6) After completing the mission along the route, the UAV automatically returns home.
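Step (5) does not specify how the captured face is compared with the uploaded target face data. A minimal sketch of one common approach, matching precomputed face embeddings by cosine similarity, is given below; the embedding vectors, the 0.6 threshold, and the function names are illustrative assumptions, not the patent's method.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_target_match(captured_embedding, uploaded_embeddings, threshold=0.6):
    """Return True if the captured face matches any uploaded target face.

    `uploaded_embeddings` stands in for the face data uploaded via the
    mobile terminal system; the threshold value is an assumption.
    """
    return any(cosine_similarity(captured_embedding, ref) >= threshold
               for ref in uploaded_embeddings)
```

A match would trigger the alert path of step (5); a non-match lets the UAV resume its route.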
The operating flow of the machine-vision-based public safety autonomous inspection quadrotor UAV is shown in Figure 6. At startup, the power switch on the power voltage-regulator board 104 is turned on first; the UAV then performs its self-check and initializes each system, and the three-axis self-stabilizing gimbal 302 returns to its initial position. When initialization is complete, the flight control system 2 signals readiness with its lights and speaker tone. After the start is confirmed on the mobile terminal system 4, the UAV takes off automatically to the configured safe altitude. If automatic mode is enabled and the mission data have been uploaded, the UAV begins steering the three-axis self-stabilizing gimbal 302 to search for targets; otherwise it hovers and waits for commands. In automatic identification and tracking mode, the data processing system 3 rotates the gimbal 302 to search for targets, and the images captured by the HD camera 303 are streamed in real time to the image processing module 301, which runs the trained model on the incoming frames. When a target is recognized in a frame, a target motion model is quickly built from the pixel coordinates of the target's center and the target's pixel size in consecutive frames; a filter such as a Kalman filter or a particle filter predicts the target's pixel position in the next frame, and the UAV motion control data are computed and sent to the embedded flight microcontroller 201. The microcontroller quickly steers the airframe to a position suitable for tracking the target and capturing its images. Once image data from the HD camera 303 and the infrared thermal imager 304 have been acquired, the image processing module 301 performs feature recognition and related processing to judge whether a person is a suspect or fugitive, whether a suspicious cluster of people exists, and whether the person's body temperature is normal; the body temperature is obtained by measuring the forehead temperature with the infrared thermal imager 304. If the result is of value, for example a criminal-type suspicious target is recognized, suspicious dangerous cluster behavior is detected, or a person with abnormal temperature is found, the data processing system 3 sends the suspicious target's image data, the recognition data from the image processing module 301, and the UAV's GPS position to the mobile terminal system 4. The mobile terminal system 4 presents the data to the operator through the application interface, and the operator confirms whether to track. Absent an operator command, the UAV tracks a fugitive in real time and continues streaming data to the mobile terminal system 4.
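The next-frame prediction described above can be sketched with a constant-velocity Kalman filter over the target's center pixel coordinates. This is a minimal numpy sketch, not the patent's implementation: the state layout, the noise covariances, and the one-frame time step are illustrative assumptions.

```python
import numpy as np

class PixelKalman:
    """Constant-velocity Kalman filter on (x, y) pixel coordinates.

    State vector: [x, y, vx, vy]; measurements are detected centers (x, y).
    """

    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0], dtype=float)
        self.P = np.eye(4) * 100.0          # large initial uncertainty
        self.F = np.eye(4)                  # constant-velocity transition
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.H = np.zeros((2, 4))           # observe position only
        self.H[0, 0] = 1.0
        self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01           # process noise (assumed)
        self.R = np.eye(2) * 1.0            # measurement noise (assumed)

    def predict(self):
        """Advance the state one frame; return the predicted pixel position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Correct the state with a newly detected center (zx, zy)."""
        z = np.array([zx, zy], dtype=float)
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Feeding each frame's detection through `update` and then calling `predict` yields the anticipated pixel position used to steer the airframe ahead of the target.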
The flight control principle of the quadrotor UAV is shown in Figure 7: the flight control system 2 is built around a cascaded double closed-loop PID controller. The outer loop is the position (angle) loop; it feeds back the current yaw angle, and its goal is to reach the desired angle. The output of the outer-loop PID is an angular rate that serves as the setpoint of the inner loop. The inner loop is the attitude (rate) loop; its goal is to reach the desired angular rate supplied by the outer loop, and its output is the speed control parameter of the brushless motors. With these two loops, the UAV's motion and position holding are much smoother. The output formulas of the double closed-loop PID controller are as follows, where out(t) is the PID controller output for the continuous system, out(k) is the PID controller output for the discretized system, err(k) is the deviation between the desired and actual values at time k, Kp, Ki, and Kd are the proportional, integral, and derivative coefficients, and T is the data update period:
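The formulas referenced above are not reproduced in this text. A standard reconstruction consistent with the symbols just defined, offered as a sketch of the conventional positional PID form rather than the patent's exact expression, is:

```latex
% Continuous-time PID controller output
out(t) = K_p \, err(t) + K_i \int_{0}^{t} err(\tau)\, d\tau
       + K_d \, \frac{d\, err(t)}{dt}

% Discretized PID controller output with update period T
out(k) = K_p \, err(k) + K_i \, T \sum_{j=0}^{k} err(j)
       + K_d \, \frac{err(k) - err(k-1)}{T}
```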
In a specific implementation, the quadrotor UAV obtains motion and attitude data through the attitude sensor of the flight control system 2. At low altitude, the UAV's height above ground is derived by passing the data from the several altitude measurement devices through a Kalman filter; at high altitude, it is obtained from the GPS module. During a safety patrol, the image processing module 301 in the data processing system 3 processes the image information of every video frame in real time with the trained model, yielding the number of human bodies and the person density in the image. The model is obtained by deep learning, training a convolutional neural network on a large labeled set of target images. The person density is computed from the number of human bodies and the relative distance between each body and the UAV, where the relative distance is obtained by dividing the number of pixels occupied by the human body by the total number of pixels in the image. From the person density, the UAV judges whether dangerous clustering behavior exists. At the same time, the UAV performs face recognition on human targets through the HD camera 303; the face model data are collected by the operator, uploaded to the mobile terminal system 4, and forwarded to the data processing system 3. Concurrently with face recognition, the head temperature of the human target is obtained through the infrared thermal imager 304, enabling the identification of fugitives and of persons with abnormal body temperature. Whether the environment is bright or dark, image data are obtained through the HD camera 303 and the infrared fill light, so the UAV can work around the clock.
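The density computation described above (a body count scaled by a relative distance inferred from the pixel ratio) can be sketched as follows. The patent does not give the exact scaling, so the observed-area formula, the units, and the threshold check here are illustrative assumptions.

```python
def relative_distance(body_pixels, total_pixels):
    """Pixel-ratio proxy for body-to-UAV distance, as described in the
    text: the fraction of the frame occupied by one body. A larger
    ratio corresponds to a closer body."""
    if body_pixels <= 0:
        raise ValueError("a detected body must occupy at least one pixel")
    return body_pixels / total_pixels

def person_density(body_pixel_counts, total_pixels):
    """People per unit of (assumed) observed area.

    Models the observed area as inversely proportional to the mean
    pixel ratio of the detected bodies; this scaling is an assumption,
    yielding density in arbitrary units.
    """
    n = len(body_pixel_counts)
    if n == 0:
        return 0.0
    mean_ratio = sum(relative_distance(c, total_pixels)
                     for c in body_pixel_counts) / n
    observed_area = 1.0 / mean_ratio    # assumed scaling
    return n / observed_area

def is_suspicious_cluster(density, threshold):
    """Flag dangerous clustering when density exceeds a set threshold."""
    return density > threshold
```

With two bodies each covering 10% of the frame, the density is higher than with the same two bodies covering 1%, reflecting that closer bodies imply a tighter observed area.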
The above is a preferred example of a specific implementation of the present invention, but the present invention is not limited to what is disclosed in this example and the accompanying drawings. Any equivalent or modification accomplished without departing from the spirit disclosed herein falls within the protection scope of the present invention.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010691797.XA CN111824406A (en) | 2020-07-17 | 2020-07-17 | A public safety autonomous inspection quadrotor UAV based on machine vision |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111824406A true CN111824406A (en) | 2020-10-27 |
Family
ID=72924302
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010691797.XA Pending CN111824406A (en) | 2020-07-17 | 2020-07-17 | A public safety autonomous inspection quadrotor UAV based on machine vision |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111824406A (en) |
Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106570892A (en) * | 2015-08-18 | 2017-04-19 | 航天图景(北京)科技有限公司 | Moving-target active tracking method based on edge enhancement template matching |
| CN106774436A (en) * | 2017-02-27 | 2017-05-31 | 南京航空航天大学 | The control system and method for the rotor wing unmanned aerial vehicle tenacious tracking target of view-based access control model |
| WO2017115120A1 (en) * | 2015-12-29 | 2017-07-06 | Hangzhou Zero Zero Technology Co., Ltd. | System and method for automated aerial system operation |
| CN107817820A (en) * | 2017-10-16 | 2018-03-20 | 复旦大学 | A kind of unmanned plane autonomous flight control method and system based on deep learning |
| CN107851358A (en) * | 2015-07-09 | 2018-03-27 | 诺基亚技术有限公司 | Monitoring |
| US20180107874A1 (en) * | 2016-01-29 | 2018-04-19 | Panton, Inc. | Aerial image processing |
| CN109324638A (en) * | 2018-12-05 | 2019-02-12 | 中国计量大学 | Four-rotor UAV target tracking system based on machine vision |
| US20190057244A1 (en) * | 2017-08-18 | 2019-02-21 | Autel Robotics Co., Ltd. | Method for determining target through intelligent following of unmanned aerial vehicle, unmanned aerial vehicle and remote control |
| CN109787679A (en) * | 2019-03-15 | 2019-05-21 | 郭欣 | Police infrared arrest system and method based on multi-rotor unmanned aerial vehicle |
| CN110232307A (en) * | 2019-04-04 | 2019-09-13 | 中国石油大学(华东) | A kind of multi-frame joint face recognition algorithms based on unmanned plane |
| CN110673641A (en) * | 2019-10-28 | 2020-01-10 | 上海工程技术大学 | An intelligent maintenance and inspection system platform for passenger aircraft based on UAV |
| CN111275760A (en) * | 2020-01-16 | 2020-06-12 | 上海工程技术大学 | Unmanned aerial vehicle target tracking system and method based on 5G and depth image information |
Non-Patent Citations (1)
| Title |
|---|
| Yu Bin: "Image Processing Based on MATLAB and Genetic Algorithms", 1 September 2015, Xidian University Press * |
Cited By (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112394746A (en) * | 2020-11-23 | 2021-02-23 | 武汉科技大学 | Intelligent epidemic prevention unmanned aerial vehicle based on machine learning and control method thereof |
| CN113268071A (en) * | 2021-01-28 | 2021-08-17 | 北京理工大学 | Unmanned aerial vehicle tracing method and system based on multi-sensor fusion |
| CN113111715B (en) * | 2021-03-13 | 2023-07-25 | 浙江御穹电子科技有限公司 | Unmanned aerial vehicle target tracking and information acquisition system and method |
| CN113111715A (en) * | 2021-03-13 | 2021-07-13 | 浙江御穹电子科技有限公司 | Unmanned aerial vehicle target tracking and information acquisition system and method |
| CN113119082A (en) * | 2021-03-18 | 2021-07-16 | 深圳市优必选科技股份有限公司 | Visual recognition circuit, visual recognition device, and robot |
| CN113052115A (en) * | 2021-04-06 | 2021-06-29 | 合肥工业大学 | Unmanned aerial vehicle airborne vital sign detection method based on video method |
| CN113306741B (en) * | 2021-04-16 | 2024-06-25 | 西安航空职业技术学院 | External unmanned aerial vehicle inspection system and method based on deep learning |
| CN113306741A (en) * | 2021-04-16 | 2021-08-27 | 西安航空职业技术学院 | External winding inspection unmanned aerial vehicle and method based on deep learning |
| CN113625777A (en) * | 2021-09-22 | 2021-11-09 | 福建江夏学院 | Multifunctional flight control circuit and method based on UAV |
| CN115854791A (en) * | 2022-06-14 | 2023-03-28 | 北京中安航信科技有限公司 | Directional sound wave driving and separating system and method for unmanned aerial vehicle |
| CN115924144A (en) * | 2023-01-04 | 2023-04-07 | 电子科技大学 | A quadrotor UAV that can deploy multiple computing and sensing devices inside the body |
| CN116778360A (en) * | 2023-06-09 | 2023-09-19 | 北京科技大学 | A ground target positioning method and device for flapping-wing flying robots |
| CN116778360B (en) * | 2023-06-09 | 2024-03-19 | 北京科技大学 | Ground target positioning method and device for flapping-wing flying robot |
| CN118413561A (en) * | 2024-07-02 | 2024-07-30 | 舟山中远海运重工有限公司 | Unmanned aerial vehicle inspection system based on deep intelligent learning algorithm and data processing method |
| CN118427374A (en) * | 2024-07-05 | 2024-08-02 | 北京领云时代科技有限公司 | Heterogeneous unmanned aerial vehicle collaborative search system and method based on reinforcement learning |
| CN118427374B (en) * | 2024-07-05 | 2024-08-30 | 北京领云时代科技有限公司 | Heterogeneous unmanned aerial vehicle collaborative search system and method based on reinforcement learning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20201027 |