CN111401297A - Triphibian robot target recognition system and method based on edge calculation and neural network - Google Patents
- Publication number
- CN111401297A CN111401297A CN202010257495.1A CN202010257495A CN111401297A CN 111401297 A CN111401297 A CN 111401297A CN 202010257495 A CN202010257495 A CN 202010257495A CN 111401297 A CN111401297 A CN 111401297A
- Authority
- CN
- China
- Prior art keywords
- neural network
- layer
- target recognition
- control board
- edge computing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
A triphibian robot target recognition system and method based on edge computing and a neural network, consisting mainly of a neural network and a triphibian robot. A camera captures images of the scene requiring rescue and transmits them over a UART serial port to the onboard edge computing control board, where a trained neural network performs target recognition on the image and outputs a target recognition map. The result is passed to the main control board, which directs the GPS communication module to transmit the coordinates to the terminal. By recognizing targets in depth with a neural network and porting the network to the edge node, edge computing saves rescue and search time and greatly improves operational efficiency.
Description
Technical Field
The invention belongs to the field of artificial-intelligence image processing, and in particular relates to a triphibian robot target recognition system and method based on edge computing and a neural network. It uses a convolutional neural network for target recognition and is especially suitable for marine garbage search and maritime rescue scenarios.
Background Art
The nation and society are gradually turning their attention toward developing the national economy through the exploitation and utilization of the ocean. However, the tendency to emphasize development over safety is still widespread: safety awareness is relatively weak, the risks of maritime activities are insufficiently understood, ship and shore management standards are low, personnel skill levels are limited, and the technical condition of offshore facilities and equipment is often poor. In addition, natural disasters such as typhoons, cold waves, and dense fog occur frequently.
The existing maritime search and rescue systems and mechanisms are not yet mature, and many problems remain in their operation; for example, weak signals at sea prevent timely contact with the terminal, and decisions made from the terminal cannot be completed immediately. Increasingly active and frequent maritime activity will inevitably raise the probability and frequency of maritime accidents. Maritime rescue work faces unprecedented challenges, and the demand for maritime search and rescue will become more urgent and more diverse.
In addition, the growth of China's tourism industry has led to large amounts of garbage accumulating on the sea surface. Much of it consists of plastic bottles and other materials that are difficult to degrade and seriously damage the marine ecological balance, so the collection of marine garbage is an urgent task.
At present, most maritime rescue systems use robots assisted by a terminal system to complete rescues. The robots themselves have no autonomous decision-making capability and cannot judge whether a danger signal exists in the current sea area, while signal transmission to and from the terminal takes time. When the signal at sea is weak, transmission becomes too slow and the rescue of personnel is delayed.
Summary of the Invention
The purpose of the present invention is to provide a triphibian robot target recognition system and method based on edge computing and a neural network. It overcomes the shortcoming that existing edge nodes lack decision-making capability, has a simple structure that is easy to realize, and its recognition method is simple and practical, with good prospects for development.
The technical scheme adopted by the present invention:
A triphibian robot target recognition system based on edge computing and a neural network, characterized in that it comprises a neural network target recognition subsystem and a triphibian robot. The inner cabin of the triphibian robot contains a main control board, an edge computing control board, and a GPS (Global Positioning System) communication module. A camera module is mounted on a gimbal on the outside of the robot. The camera transmits the captured raw image through a UART (Universal Asynchronous Receiver/Transmitter) serial port on the main control board to the edge computing control board inside the robot. The edge computing control board runs a trained neural network to recognize targets in the image, outputs a target recognition map, and then transmits the output signal to the main control board. The trained neural network target recognition subsystem installed on the edge computing control board performs target recognition of people in distress at sea in the images to be recognized.
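The UART handoff described above can be sketched as a simple framing layer around each image chunk. The frame layout below (start byte, 2-byte length, payload, XOR checksum) is a hypothetical format chosen for illustration; the patent does not specify the actual wire protocol.

```python
import struct

START_BYTE = 0xAA  # hypothetical frame delimiter (not specified in the patent)

def frame_image_chunk(chunk: bytes) -> bytes:
    """Wrap one chunk of camera image data for transfer over the UART link.

    Assumed frame layout: start byte, 2-byte big-endian payload length,
    payload bytes, then a 1-byte XOR checksum of the payload.
    """
    checksum = 0
    for b in chunk:
        checksum ^= b
    return struct.pack(">BH", START_BYTE, len(chunk)) + chunk + bytes([checksum])

def parse_frame(frame: bytes) -> bytes:
    """Validate a frame produced by frame_image_chunk and return its payload."""
    start, length = struct.unpack(">BH", frame[:3])
    if start != START_BYTE:
        raise ValueError("bad start byte")
    payload = frame[3:3 + length]
    checksum = 0
    for b in payload:
        checksum ^= b
    if checksum != frame[3 + length]:
        raise ValueError("checksum mismatch")
    return payload
```

On real hardware the framed bytes would be written to the serial device on either end of the link; the framing itself is independent of the transport.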
The main control board is an STM32F4 chip.
The edge computing control board adopts a heterogeneous "CPU (Central Processing Unit) + GPU (Graphics Processing Unit)" structure to carry out the neural network computation, where the GPU assists the CPU by accelerating the computation.
The neural network target recognition subsystem is a convolutional neural network consisting of an input layer, hidden layers, and an output layer. The input layer is connected to the output layer through the hidden layers, each layer is provided with a fully convolutional feature extractor, and each feature extractor contains a convolution kernel structure inside.
The neural network target recognition subsystem has at least one input layer and at least one hidden layer.
The neural network target recognition subsystem is implemented with the YOLOv3 algorithm. The backbone of YOLOv3 is Darknet-53, which uses shortcut connections to link non-adjacent layers, solving the vanishing-gradient problem caused by the increased depth of Darknet-53.
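The shortcut connection can be sketched numerically: a residual unit adds its input to the transformed features, so even a deep stack keeps an identity path for the gradient. This is a minimal NumPy sketch, not the real network: it uses 1×1 convolutions (per-pixel matrix products) in place of Darknet-53's 3×3 convolutions, and zero or random weights stand in for trained ones.

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # Leaky ReLU, the activation commonly used in the Darknet family
    return np.where(x > 0, x, alpha * x)

def conv1x1(x, w):
    # x: (H, W, C_in) feature map, w: (C_in, C_out) kernel.
    # A 1x1 convolution is a matrix product applied independently at every pixel.
    return x @ w

def residual_block(x, w_reduce, w_expand):
    """Darknet-style residual unit (simplified): reduce channels, expand
    them back, then add the input through the shortcut connection."""
    y = leaky_relu(conv1x1(x, w_reduce))
    y = leaky_relu(conv1x1(y, w_expand))
    return x + y  # the shortcut: an unobstructed path for the gradient
```

With all-zero weights the block reduces exactly to the identity, which illustrates why adding such units cannot make the network worse than its shallower counterpart.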
Darknet-53 combines full convolution with residual structures.
The number of hidden layers should be neither too large nor too small. The current YOLOv3 uses 53 layers, which repeated experiments have shown to be the best known choice. If there are too few hidden layers, the network cannot attain the necessary learning and information-processing capacity; conversely, too many layers greatly increase the complexity of the network structure (a point that is especially important for networks implemented in hardware), make the network more likely to fall into local minima during training, and slow learning considerably.
A triphibian robot target recognition method based on edge computing and a neural network, characterized in that it comprises the following steps:
(1) Install the Ubuntu system on the edge computing board, then use Anaconda to set up TensorFlow and perform deep-learning training in the TensorFlow framework, thereby building the neural network target recognition subsystem;
(2) The camera collects image information of distress situations at sea and passes it to the edge computing control board in the robot's inner cabin, where it is preprocessed;
(3) Assume the image received by the edge computing control board is an S×S picture. The original pixel image is passed through the convolutional layer to produce the output pixel image; that is, the original pixel image is filtered by the convolution kernels in the convolutional layer to obtain the output pixel image;
The convolutional layer in step (3) comprises an input layer, hidden layers, and an output layer. The input layer is connected to the output layer through the hidden layers, each layer is provided with a fully convolutional feature extractor, and each feature extractor contains a convolution kernel structure inside. The hidden layers contain at least two feature layers, and the input of each feature layer receives the output signal of the preceding feature layer.
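The filtering that step (3) describes — a kernel sliding over the S×S original pixel image to produce the output pixel image — can be sketched as follows (stride 1, no padding; a minimal reference implementation, not the optimized kernels an edge board would run):

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Apply one k x k convolution kernel to an S x S image.

    Each output pixel is the kernel-weighted sum of the image patch
    beneath it; the output is (S - k + 1) x (S - k + 1).
    """
    s, k = image.shape[0], kernel.shape[0]
    out = np.zeros((s - k + 1, s - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out
```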
(4) The neural network target recognition subsystem on the edge computing control board first applies deep convolutions to the image collected in step (2), performing dimensionality reduction: a linear projection maps the high-dimensional data into a lower-dimensional space, chosen so that the variance of the data along the projected dimensions is maximized. This preserves as much of the character of the original data points as possible while using fewer dimensions, reducing the feature map down to 13×13;
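The reduction to a 13×13 feature map follows from repeated stride-2 convolutions, each of which halves the spatial size. The sketch below assumes the common YOLOv3 input size of 416×416; the patent itself only specifies an S×S input.

```python
def feature_map_size(input_size: int, n_stride2_layers: int) -> int:
    """Spatial size after a number of stride-2 convolutions."""
    size = input_size
    for _ in range(n_stride2_layers):
        size //= 2  # each stride-2 convolution halves height and width
    return size

# Five stride-2 stages take a 416x416 input down to the 13x13 map:
# 416 / 2**5 == 13.
```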
(5) The edge computing control board calls the OpenCV library to process the image. Since the neural network target recognition subsystem was built and trained in step (1), the image signal undergoes target recognition and processing in it, and the neural network identifies the people in distress at sea in the picture.
The recognition of people in distress at sea in step (5) specifically means: the 53 layers of Darknet-53 divide the picture to be recognized, then regress on the recognition samples to predict target bounding boxes and class labels. If a "person" class label is detected, a signal is returned for decision-making. Each feature layer after division outputs a prediction result, predicting which detections are people in distress at sea. Finally, the edge computing control board predicts the event according to the confidence scores, obtains the final prediction result, and determines whether people in distress at sea are present in the processed picture. The edge computing control board makes the rescue decision and sends the rescue signal to the main control board through the UART communication interface; the main control board receives the signal, enters the interrupt service routine, and then carries out the rescue.
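The decision logic of step (5) can be sketched as filtering the per-feature-layer predictions by class and confidence. The tuple format and the 0.5 threshold are assumptions for illustration, not values fixed by the patent:

```python
def rescue_decision(predictions, conf_threshold=0.5):
    """Return the highest-confidence 'person' detection, or None.

    predictions: iterable of (class_label, confidence, bbox) tuples,
    one per grid-cell prediction from the detection head. A non-None
    result would trigger the rescue signal to the main control board.
    """
    people = [p for p in predictions
              if p[0] == "person" and p[1] >= conf_threshold]
    if not people:
        return None  # nothing detected above threshold: no rescue signal
    return max(people, key=lambda p: p[1])
```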
Working principle of the invention: the triphibian robot uses an STM32F407 as its main control board, which regulates motor speed with PID control to keep the robot in a steady state. When the camera captures a person at sea, the robot self-stabilizes in the air through this regulation. Edge computing processes and analyzes data in real time or faster, moving data processing closer to the source rather than to an external data center or cloud. The edge computing board inside the robot carries a neural network trained in advance, so it can make decisions autonomously without transmitting data to the terminal for neural network recognition; this gives the triphibian robot autonomous decision-making capability. The edge computing control board sends the final decision result obtained in the steps above back to the STM32F4 main control board, which carries out the robot rescue operation.
In ordinary cloud computing, processing and algorithmic decisions are made at the terminal, which then relays commands to the robot; edge computing instead pushes intelligence and computation closer to the point of action. Completing maritime rescue and garbage search through cooperating robots greatly improves efficiency. Because communication between the terminal and the robot takes a long time, exploiting edge computing to give the robot on-board decision capability greatly improves efficiency: data can be processed and analyzed in real time or faster, processing moves closer to the source rather than an external data center or cloud, and latency is shortened. Once people in distress and marine garbage have been identified, the results need not be transmitted to the terminal for decision-making, multiplying efficiency.
Advantages of the present invention: for maritime rescue and marine garbage search, targets are recognized in depth by a neural network, and the network is ported to the edge node; edge computing saves rescue and search time and greatly improves operational efficiency.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the structure of the neural network in the triphibian robot target recognition method based on edge computing and a neural network of the present invention (1: input layer; 2: hidden layer; 3: output layer).
Fig. 2 is an exploded view of the image-processing convolutional layer in the method of the present invention (4: original pixel image; 5: output pixel image; 6: convolutional layer).
Fig. 3 is a schematic diagram of the neural network processing flow in the method of the present invention.
Fig. 4 is a schematic diagram of the overall structure of the triphibian robot target recognition system of the present invention.
Fig. 5 is a schematic flowchart of the triphibian robot target recognition method of the present invention.
Detailed Description of the Embodiments
Embodiment: a triphibian robot target recognition system based on edge computing and a neural network, as shown in Fig. 4, characterized in that it comprises a neural network target recognition subsystem and a triphibian robot. The inner cabin of the robot contains a main control board, an edge computing control board, and a GPS communication module. A camera module is mounted on a gimbal on the outside of the robot. The camera transmits the captured raw image through the UART serial port on the main control board to the edge computing control board inside the robot, which runs a trained neural network to recognize targets in the image, outputs a target recognition map, and transmits the output signal to the main control board. The trained neural network target recognition subsystem installed on the edge computing control board performs target recognition of people in distress at sea in the images to be recognized.
The main control board is an STM32F4 chip.
The edge computing control board adopts a heterogeneous "CPU + GPU" structure to carry out the neural network computation, where the GPU assists the CPU by accelerating the computation.
The neural network target recognition subsystem is a convolutional neural network consisting of input layer 1, hidden layer 2, and output layer 3; input layer 1 is connected to output layer 3 through hidden layer 2, as shown in Fig. 1. Each layer is provided with a fully convolutional feature extractor, and each feature extractor contains a convolution kernel structure inside.
The neural network target recognition subsystem has at least one input layer 1 and at least one hidden layer 2.
The neural network target recognition subsystem is implemented with the YOLOv3 algorithm. The backbone of YOLOv3 is Darknet-53, which uses shortcut connections to link non-adjacent layers, solving the vanishing-gradient problem caused by the increased depth of Darknet-53.
Darknet-53 combines full convolution with residual structures.
The number of hidden layers 2 should be neither too large nor too small. The current YOLOv3 uses 53 layers, which repeated experiments have shown to be the best known choice. If hidden layer 2 has too few layers, the network cannot attain the necessary learning and information-processing capacity; conversely, too many layers greatly increase the complexity of the network structure (a point that is especially important for networks implemented in hardware), make the network more likely to fall into local minima during training, and slow learning considerably.
A triphibian robot target recognition method based on edge computing and a neural network, as shown in Figs. 3 and 5, characterized in that it comprises the following steps:
(1) Install the Ubuntu system on the edge computing board, then use Anaconda to set up TensorFlow and perform deep-learning training in the TensorFlow framework, thereby building the neural network target recognition subsystem;
(2) The camera collects image information of distress situations at sea and passes it to the edge computing control board in the robot's inner cabin, where it is preprocessed;
(3) Assume the image received by the edge computing control board is an S×S picture. Original pixel image 4 is passed through convolutional layer 6 to produce output pixel image 5; that is, original pixel image 4 is filtered by the convolution kernels in convolutional layer 6 to obtain output pixel image 5, as shown in Fig. 2;
Convolutional layer 6 in step (3) comprises input layer 1, hidden layer 2, and output layer 3. Input layer 1 is connected to output layer 3 through hidden layer 2, each layer is provided with a fully convolutional feature extractor, and each feature extractor contains a convolution kernel structure inside. Hidden layer 2 contains at least two feature layers, and the input of each feature layer receives the output signal of the preceding feature layer.
(4) The neural network target recognition subsystem on the edge computing control board first applies deep convolutions in convolutional layer 6 to the image collected in step (2), performing dimensionality reduction: a linear projection maps the high-dimensional data into a lower-dimensional space, chosen so that the variance of the data along the projected dimensions is maximized. This preserves as much of the character of the original data points as possible while using fewer dimensions, reducing the feature map down to 13×13;
(5) The edge computing control board calls the OpenCV library to process the image. Since the neural network target recognition subsystem was built and trained in step (1), the image signal undergoes target recognition and processing in it, and the neural network identifies the people in distress at sea in the picture.
The recognition of people in distress at sea in step (5) specifically means: the 53 layers of Darknet-53 divide the picture to be recognized, then regress on the recognition samples to predict target bounding boxes and class labels. If a "person" class label is detected, a signal is returned for decision-making. Each feature layer after division outputs a prediction result, predicting which detections are people in distress at sea. Finally, the edge computing control board predicts the event according to the confidence scores, obtains the final prediction result, and determines whether people in distress at sea are present in the processed picture. The edge computing control board makes the rescue decision and sends the rescue signal to the main control board through the UART communication interface; the main control board receives the signal, enters the interrupt service routine, and then carries out the rescue.
The present invention is further described in detail below with reference to the accompanying drawings and specific embodiments. The following embodiments are merely descriptive, not restrictive, and do not limit the protection scope of the present invention.
A triphibian robot target recognition system and method based on edge computing and a neural network: the system consists mainly of a neural network and a triphibian robot. A camera module mounted on the outside of the robot transmits the captured raw image through the UART serial port to the edge computing control board inside the robot. The edge computing control board recognizes targets in the picture with the trained neural network, outputs the target recognition map, and then transmits the output signal to the STM32F4 main control board, which directs the GPS communication module to transmit the coordinates to the terminal.
The TensorFlow framework is set up through Anaconda, a convolutional neural network is built in TensorFlow, and the neural network model is ported to the edge computing control board to realize edge computing and reduce rescue time; the YOLOv3 algorithm improves the accuracy of target recognition. First the camera collects image information and passes it to the edge computing control board in the triphibian robot for target recognition. The board first receives an S×S picture, which deep convolutions reduce to a 13×13 feature map;
Darknet-53 has 53 convolutional layers in total; excluding the final fully connected layer (which is actually implemented with a 1×1 convolution), 52 convolutions serve as the backbone network. First comes one convolutional layer with 32 filters, followed by five groups of residual units. Each group consists of one standalone convolutional layer plus a set of repeated convolutional layers, repeated 1, 2, 8, 8, and 4 times respectively. In each repeated convolutional layer, a 1×1 convolution is performed first and then a 3×3 convolution, with the filter count first halved and then restored, for 52 layers in total. The residual additions are not counted as convolutional layers: 52 = 1 + (1 + 1×2) + (1 + 2×2) + (1 + 8×2) + (1 + 8×2) + (1 + 4×2). The standalone convolutional layer at the start of each residual group is a convolution with stride 2, so the whole YOLOv3 network downsamples five times, by a factor of 32 in total (32 = 2^5); the final output feature map is therefore 13×13, which is how the dimension is reduced to 13;
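The layer count and downsampling factor given above can be checked directly:

```python
# One initial 32-filter conv, then five residual groups; each group has one
# standalone stride-2 conv plus (repeats x 2) convs from its 1x1/3x3 pairs.
repeats = [1, 2, 8, 8, 4]
backbone_layers = 1 + sum(1 + r * 2 for r in repeats)
assert backbone_layers == 52  # 52 backbone convs; the 53rd is the 1x1 head

# Each group's standalone conv has stride 2, so the total downsampling
# factor is 2**5 = 32, giving the 13x13 output map for a 416x416 input.
assert 2 ** len(repeats) == 32
```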
The Darknet-53 framework has fully convolutional feature extractors at the three feature-map scales of 52, 26, and 13. Inside each feature extractor, multiple interleaved convolution kernels achieve the same goal, with no essential difference: all reduce the dimension to 13. The input of the current feature layer includes part of the output of the previous layer. Each feature layer outputs a prediction result, and finally linear regression is performed on the results according to the confidence scores to obtain the final prediction. The resulting target extraction information is sent back to the STM32F4 main control board for command control.
Although embodiments and drawings of the present invention are disclosed for purposes of illustration, those skilled in the art will appreciate that various substitutions, changes, and modifications are possible without departing from the spirit and scope of the invention and the appended claims; therefore, the scope of the present invention is not limited to what is disclosed in the embodiments and drawings.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010257495.1A CN111401297A (en) | 2020-04-03 | 2020-04-03 | Triphibian robot target recognition system and method based on edge calculation and neural network |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010257495.1A CN111401297A (en) | 2020-04-03 | 2020-04-03 | Triphibian robot target recognition system and method based on edge calculation and neural network |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111401297A true CN111401297A (en) | 2020-07-10 |
Family
ID=71413708
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010257495.1A Pending CN111401297A (en) | 2020-04-03 | 2020-04-03 | Triphibian robot target recognition system and method based on edge calculation and neural network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111401297A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116229275A (en) * | 2023-04-18 | 2023-06-06 | 天津理工大学 | System and method for 6D pose recognition of occluded target based on spherical amphibious robot |
| CN117168545A (en) * | 2023-10-30 | 2023-12-05 | 自然资源部第一海洋研究所 | A method and system for observing ocean phenomena based on local recognition at the buoy end |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140241633A1 (en) * | 2013-02-28 | 2014-08-28 | Raytheon Company | Rapid detection |
| CN107092926A (en) * | 2017-03-30 | 2017-08-25 | 哈尔滨工程大学 | Service robot object recognition algorithm based on deep learning |
| CN109718069A (en) * | 2019-03-06 | 2019-05-07 | 吉林大学 | A kind of guide intelligent terminal for typical crossroad |
| CN109902201A (en) * | 2019-03-08 | 2019-06-18 | 天津理工大学 | A Recommendation Method Based on CNN and BP Neural Network |
| CN110110596A (en) * | 2019-03-29 | 2019-08-09 | 西北大学 | High spectrum image feature is extracted, disaggregated model constructs and classification method |
| CN110163818A (en) * | 2019-04-28 | 2019-08-23 | 武汉理工大学 | A kind of low illumination level video image enhancement for maritime affairs unmanned plane |
| CN110321775A (en) * | 2019-04-08 | 2019-10-11 | 武汉理工大学 | A kind of drowning man's autonomous classification method waterborne based on multi-rotor unmanned aerial vehicle |
| CN110348304A (en) * | 2019-06-06 | 2019-10-18 | 武汉理工大学 | A kind of maritime affairs distress personnel search system being equipped on unmanned plane and target identification method |
| WO2020020472A1 (en) * | 2018-07-24 | 2020-01-30 | Fundación Centro Tecnoloxico De Telecomunicacións De Galicia | A computer-implemented method and system for detecting small objects on an image using convolutional neural networks |
| US20200050893A1 (en) * | 2018-08-10 | 2020-02-13 | Buffalo Automation Group Inc. | Training a deep learning system for maritime applications |
| CN110929697A (en) * | 2019-12-17 | 2020-03-27 | 中国人民解放军海军航空大学 | Neural network target identification method and system based on residual error structure |
Non-Patent Citations (1)
| Title |
|---|
| Liu Yun; Qian Meiyi; Li Hui; Wang Chuanxu: "Research on multi-scale, multi-person target detection methods based on deep learning" * |
Similar Documents
| Publication | Title |
|---|---|
| CN107818571B (en) | Ship automatic tracking method and system based on deep learning network and average drifting |
| CN110223302B (en) | Ship multi-target detection method based on rotation region extraction |
| CN110728308B (en) | Interactive blind guiding system and method based on improved Yolov2 target detection and voice recognition |
| WO2021249071A1 (en) | Lane line detection method, and related apparatus |
| CN114926469B (en) | Semantic segmentation model training method, semantic segmentation method, storage medium and terminal |
| CN110119718A (en) | A kind of overboard detection and Survivable Control System based on deep learning |
| CN108597057A (en) | A kind of unmanned plane failure predication diagnostic system and method based on noise deep learning |
| CN106971152A (en) | A kind of method of Bird's Nest in detection transmission line of electricity based on Aerial Images |
| CN108230302A (en) | A kind of nuclear power plant's low-temperature receiver marine site invasion marine organisms detection and method of disposal |
| CN114418930A (en) | Underwater whale target detection method based on light YOLOv4 |
| CN110135476A (en) | A kind of detection method of personal safety equipment, device, equipment and system |
| CN114724177B (en) | Human drowning detection method combining Alphapose and YOLOv5s model |
| CN110569843A (en) | A method for intelligent detection and recognition of mine targets |
| WO2024139301A1 (en) | Behavior recognition method and apparatus, and electronic device and computer storage medium |
| CN113487610B (en) | Herpes image recognition method and device, computer equipment and storage medium |
| CN115171336A (en) | Drowned protection system of beach control |
| CN118781478B (en) | An unmanned boat obstacle recognition method based on image analysis and its model construction method |
| CN111401297A (en) | Triphibian robot target recognition system and method based on edge calculation and neural network |
| CN114332163A (en) | A method and system for high-altitude parabolic detection based on semantic segmentation |
| WO2022222233A1 (en) | Usv-based obstacle segmentation network and method for generating same |
| CN107290975A (en) | A kind of house intelligent robot |
| CN112016373B (en) | An intelligent auxiliary search and rescue system for people in distress on the water based on visual perception and calculation |
| CN116469164A (en) | Human gesture recognition man-machine interaction method and system based on deep learning |
| CN112069997A (en) | A method and device for autonomous landing target extraction of unmanned aerial vehicles based on DenseHR-Net |
| Hu et al. | Detection of underwater plastic waste based on improved YOLOv5n |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200710 |