CN103996181B - Large field-of-view image stitching system and method based on bionic binocular vision - Google Patents
Abstract
The invention discloses a large field-of-view image stitching system and method based on bionic binocular vision. The system includes high-definition cameras connected to onboard fast processors (SECO CARMA DevKit); the onboard processors and a high-performance computer are joined into a distributed computing network using Zeroconf and the Robot Operating System (ROS). After a camera captures a high-definition image, the image is transferred over USB to the onboard CARMA DevKit, which extracts feature points; the high-performance computer then matches and stitches the images. In the method of the invention, the high-performance computer and the CARMA DevKit fast image processors form a distributed computing network with different nodes: the onboard embedded CARMA DevKit first extracts feature points, and image stitching is then completed on the host. Embodiments of the invention are mainly used for large field-of-view image stitching, in particular for wide-field environment perception on mobile robots.
Description
Technical Field
The invention discloses a large field-of-view image stitching system and method based on bionic binocular vision, and relates to the fields of robot vision, feature point extraction, and CUDA parallel computing.
Background
Robot vision is the most efficient means of perceiving the surrounding environment and plays a vital role in mobile robot navigation. Real-time, wide field-of-view image information is of great importance for the navigation of ground mobile robots.
Image stitching refers to combining multiple small images of the same scene, each sharing some overlap, into a single large image with a wide viewing angle. Stitching overcomes the angular limitations of a single lens and expands the field of view; it is widely used in remote sensing image processing, medical image analysis, cartography, computer vision, video surveillance, virtual reality, super-resolution reconstruction, and robot navigation. Because stitching involves large amounts of data and intensive computation, many serial processing methods cannot meet real-time requirements in practice.
At present, running complex image algorithms directly on an onboard platform requires dedicated FPGA or DSP hardware, with the algorithm implemented and optimized on that hardware, which reduces cost-effectiveness. Alternatively, in the visual servo systems of mobile robot platforms, image data is transmitted over the network to an upstream server for processing, which places high demands on that server.
Most image processing on onboard platforms is limited to a resolution of 640×480, so high-definition images cannot be processed in real time. Because of the heavy computation and large data volume, the industrial control computers mounted on mobile robot platforms cannot perform image stitching directly. The stitching process consists of three main steps: image acquisition, image matching, and image fusion, with image matching being the core technique. The key to image matching is to precisely locate the overlapping regions of the two images and thereby obtain the transformation between them. Feature-point-based matching algorithms exist for image stitching; the feature point extraction step is computationally expensive and can be accelerated with GPU parallelization.
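The transformation between two overlapping views is commonly expressed as a 3×3 homography matrix acting on homogeneous pixel coordinates. As a minimal illustration of how such a matrix maps points (a generic sketch, not part of the patent itself):

```python
def apply_homography(h, pts):
    """Map (x, y) points through a 3x3 homography given as nested lists."""
    out = []
    for x, y in pts:
        xh = h[0][0] * x + h[0][1] * y + h[0][2]
        yh = h[1][0] * x + h[1][1] * y + h[1][2]
        w = h[2][0] * x + h[2][1] * y + h[2][2]
        out.append((xh / w, yh / w))  # back from homogeneous coordinates
    return out

# a pure translation by (100, 0): a point in one image lands
# 100 pixels to the right on the stitched canvas
H = [[1, 0, 100],
     [0, 1, 0],
     [0, 0, 1]]
print(apply_homography(H, [(10, 20)]))  # → [(110.0, 20.0)]
```

A general homography additionally encodes rotation, scale, and perspective through the other entries; the division by `w` is what distinguishes it from a plain affine map.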
In March 2010, Willow Garage (http://www.willowgarage.com/) released the Robot Operating System (ROS). ROS provides libraries and tools that help software developers build robot applications, including a hardware abstraction layer, hardware drivers, visualization tools, message passing, and package management. Around the same time, the embedded hardware vendor SECO, together with NVIDIA, released the CARMA DevKit, an embedded GPGPU platform solution for researchers that uses GPU parallel computing to accelerate computation and can form a mobile high-performance computing platform.
Summary of the Invention
To overcome the above shortcomings of the prior art, the present invention provides a large field-of-view image stitching system and method based on bionic binocular vision, solving the problem that existing high-definition real-time image stitching is either slow or requires dedicated hardware.
To achieve this, the concept of the invention is as follows: first, the image input system captures high-definition images; the images are then sent to the SECO CARMA DevKit embedded CUDA hardware/software platform, which performs real-time feature point detection on the captured high-definition images; the results are then returned to the host computer, which completes the image stitching.
The large field-of-view image stitching system based on bionic binocular vision of the present invention comprises:
(1) High-definition image input: an ARTAM-1400MI-USB3 high-definition camera feeds images to the processor through a USB interface;
(2) A fast image processing system: the parallel computing capability of the SECO CARMA DevKit embedded CUDA platform performs real-time feature point detection on the high-definition images as they are captured;
(3) The host computer subscribes to the results computed on the SECO CARMA DevKit embedded CUDA parallel computing platform and performs the final image matching and fusion.
Based on the above inventive concept, the present invention adopts the following technical scheme:
A large field-of-view image stitching system based on bionic binocular vision, comprising two high-definition cameras, characterized in that: the two high-definition cameras are each connected to a corresponding fast image processor (CARMA DevKit); the fast image processors are connected over the network to a switch; and the switch is connected to a main control computer and a DSP processor.
A large field-of-view image stitching method based on bionic binocular vision, which performs image stitching using the above system, characterized in that the stitching steps are as follows:
Step ①: Set the main control computer as the MASTER of the Robot Operating System (ROS); each CARMA DevKit creates an image acquisition node, and the high-definition cameras capture high-definition images that are transferred in real time over USB to the fast image processors (CARMA DevKit);
Step ②: Each CARMA DevKit, using the distributed computing features of ROS, creates a SURF algorithm node that extracts feature points from the acquired images;
Step ③: The CARMA DevKit computing nodes publish the feature point descriptor data to the matching node on the main control computer, which performs best matching of the feature points;
Step ④: The best-match results computed by the matching node on the main control computer are published to the affine stitching node, which performs the affine computation and stitches the data from the left and right cameras into a single image.
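The four steps above form a publish/subscribe pipeline across ROS nodes: camera nodes publish images, feature nodes publish descriptors, and the matching/stitching nodes consume them. A minimal single-process sketch of that data flow, using Python queues in place of ROS topics (all names here are illustrative, not from the patent):

```python
import queue
import threading

# queues stand in for ROS topics: images -> features -> matcher
image_topic = queue.Queue()
feature_topic = queue.Queue()

def feature_node():
    """Step 2 stand-in: 'extract' features from one published image."""
    img = image_topic.get()
    feats = [(i, v) for i, v in enumerate(img) if v > 0]  # toy "keypoints"
    feature_topic.put(feats)

def matching_node():
    """Steps 3-4 stand-in: consume features from both cameras."""
    left, right = feature_topic.get(), feature_topic.get()
    return len(left), len(right)

# two camera nodes each publish one "image" (step 1)
image_topic.put([0, 3, 0, 5])
image_topic.put([7, 0, 0, 2])

# one feature node per CARMA DevKit, running concurrently (step 2)
workers = [threading.Thread(target=feature_node) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

result = matching_node()
print(result)  # → (2, 2)
```

The point of the design is that the producers and consumers are decoupled: the feature nodes (the DevKits) and the matching node (the host) only agree on the topic, not on each other's timing or location.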
The above large field-of-view image stitching method based on bionic binocular vision is characterized in that the specific steps of the image acquisition in step ① are as follows:
Step 1-1: Use Zeroconf to interconnect the main control computer and the two CARMA DevKits as network nodes, then set the main control computer as the MASTER of the ROS distributed system.
Step 1-2: Use the uvc_camera driver package integrated in ROS to create a camera node that opens the camera and publishes the image data.
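In ROS 1, selecting a MASTER as in step 1-1 amounts to pointing every machine at the same master URI. A hedged sketch of what this configuration might look like, assuming Zeroconf (e.g. Avahi) provides the `.local` hostnames; the hostnames below are illustrative, not from the patent:

```shell
# on each CARMA DevKit: resolve the master via its Zeroconf (.local) name
export ROS_MASTER_URI=http://master-pc.local:11311
export ROS_HOSTNAME=$(hostname).local

# on the main control computer: start the ROS master
roscore
```

This is a configuration fragment only; the exact package names and launch mechanics depend on the ROS distribution in use.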
The above large field-of-view image stitching method based on bionic binocular vision is characterized in that the specific steps of the image feature point detection in step ② are as follows:
Step 2-1: Install ROS on each CARMA DevKit and connect the CARMA DevKit to the main control computer via Zeroconf.
Step 2-2: Use the OpenCV image processing library integrated with ROS to create a distributed SURF algorithm node, extract feature points from the acquired images, and publish the results to the matching node on the host.
The above large field-of-view image stitching method based on bionic binocular vision is characterized in that the specific steps of the image feature point matching in step ③ are as follows:
Step 3-1: Install ROS on the main control computer and use Zeroconf to set it as the MASTER of the entire ROS distributed computation.
Step 3-2: Create a best-match node that subscribes to the feature point descriptors published by the SURF algorithm nodes and performs best matching of the feature points.
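Best matching of descriptors is typically done by nearest-neighbor search with Lowe's ratio test, which keeps a match only when the nearest neighbor is clearly closer than the second-nearest. A minimal pure-Python sketch (illustrative, not the patent's implementation):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_test_match(desc_left, desc_right, ratio=0.8):
    """For each left descriptor, keep its nearest right neighbor only if it
    is clearly closer than the second-nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_left):
        dists = sorted((euclidean(d, r), j) for j, r in enumerate(desc_right))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

left = [(0.0, 0.0), (10.0, 10.0)]
right = [(0.1, 0.0), (30.0, 30.0), (10.0, 9.9)]
print(ratio_test_match(left, right))  # → [(0, 0), (1, 2)]
```

Real SURF descriptors are 64- or 128-dimensional rather than 2-dimensional, but the matching logic is the same.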
The above large field-of-view image stitching method based on bionic binocular vision is characterized in that the specific steps of the affine image stitching in step ④ are as follows:
Step 4-1: Create an image affine stitching node on the main control computer; subscribe to the best-match information published by the matching node and to the image data published by the CARMA DevKit image acquisition nodes.
Step 4-2: Compute the homography matrix between the left and right images from the best-matched feature points, apply the affine transformation, stitch the two images into one, and fuse the data.
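Estimating the full homography from the matched points is usually done robustly with RANSAC (OpenCV's `cv2.findHomography` does exactly this). As a deliberately simplified pure-Python illustration of the idea, assuming the two views differ only by a translation, the alignment offset can be estimated as the mean displacement of matched point pairs:

```python
def estimate_translation(pts_left, pts_right):
    """Translation-only stand-in for homography estimation: the mean
    displacement of matched point pairs aligns the right image onto
    the left image's canvas."""
    n = len(pts_left)
    dx = sum(l[0] - r[0] for l, r in zip(pts_left, pts_right)) / n
    dy = sum(l[1] - r[1] for l, r in zip(pts_left, pts_right)) / n
    return dx, dy

# matched points: the right camera sees the same features 300 px to the left
pts_left = [(400.0, 50.0), (420.0, 80.0), (380.0, 120.0)]
pts_right = [(100.0, 50.0), (120.0, 80.0), (80.0, 120.0)]
print(estimate_translation(pts_left, pts_right))  # → (300.0, 0.0)
```

With the offset (or, in general, the homography) known, the right image is warped onto the shared canvas and the overlapping band is blended during the fusion step.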
Compared with the prior art, the present invention has the following notable substantive features and advantages: it can rapidly detect feature points in high-definition input images on the onboard platform itself. Dedicated onboard image processing hardware and hardware-specific algorithm optimization are therefore unnecessary, and the feature point detection and image matching system can be deployed quickly on a mobile robot platform.
Brief Description of the Drawings
Fig. 1 is a hardware block diagram of the system of the present invention.
Fig. 2 is a flow chart of the method of the present invention.
Fig. 3 shows an example of the image stitching result.
Detailed Description
Preferred embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. The described embodiments are, of course, only some of the embodiments of the present invention.
Embodiment 1:
Referring to Fig. 1, this large field-of-view image stitching system based on bionic binocular vision includes two high-definition cameras 1 and 2, each connected to a corresponding fast image processor, CARMA DevKit 3 and 4 (the CARMA DevKit is an embedded GPGPU platform solution for researchers released jointly by the embedded hardware vendor SECO and NVIDIA; it uses GPU parallel computing to accelerate computation and forms a mobile high-performance computing platform). The two fast image processors CARMA DevKit 3 and 4 are connected over the network to a switch 5; the switch 5 is connected to a main control computer 6 and a DSP processor 7. The two high-definition cameras 1 and 2 are ARTAM-1400MI-USB3 14-megapixel USB3 industrial cameras from ARTRAY (Japan). The two CARMA DevKit processors 3 and 4 are produced by SECO and are used mainly for embedded CUDA computation. The switch 5 is a TP-LINK TL-SF1008+ 8-port 100 Mbit/s switch; the main control computer 6 is an HP 8470w mobile workstation; and the DSP processor 7 uses a TMS320F2812 (DSP2812) development board.
Embodiment 2:
Referring to Figs. 1, 2 and 3, this large field-of-view image stitching method based on bionic binocular vision uses the above system for image stitching; the stitching steps are as follows:
Step ①: Set the main control computer as the MASTER of the Robot Operating System (ROS; released by Willow Garage in March 2010, ROS provides libraries and tools that help software developers build robot applications, including a hardware abstraction layer, hardware drivers, visualization tools, message passing, and package management). Each CARMA DevKit creates an image acquisition node, and the high-definition cameras capture high-definition images that are transferred in real time over USB to the fast image processors (CARMA DevKit);
Step ②: Each CARMA DevKit, using the distributed computing features of ROS, creates a SURF algorithm node that extracts feature points from the acquired images;
Step ③: The CARMA DevKit computing nodes publish the feature point descriptor data to the matching node on the main control computer, which performs best matching of the feature points;
Step ④: The best-match results computed by the matching node on the main control computer are published to the affine stitching node, which performs the affine computation and stitches the data from the left and right cameras into a single image.
Embodiment 3:
This embodiment is essentially the same as Embodiment 2, except that the image acquisition of step ① comprises the following specific steps:
Step 1-1: Use Zeroconf to interconnect the main control computer and the two CARMA DevKits as network nodes, then set the main control computer as the MASTER of the ROS distributed system.
Step 1-2: Use the uvc_camera driver package integrated in ROS to create a camera node that opens the camera and publishes the image data.
The image feature point detection of step ② comprises the following specific steps:
Step 2-1: Install ROS on each CARMA DevKit and connect the CARMA DevKit to the main control computer via Zeroconf.
Step 2-2: Use the OpenCV image processing library integrated with ROS to create a distributed SURF algorithm node, extract feature points from the acquired images, and publish the results to the matching node on the host.
The image feature point matching of step ③ comprises the following specific steps:
Step 3-1: Install ROS on the main control computer and use Zeroconf to set it as the MASTER of the entire ROS distributed computation.
Step 3-2: Create a best-match node that subscribes to the feature point descriptors published by the SURF algorithm nodes and performs best matching of the feature points.
The affine image stitching of step ④ comprises the following specific steps:
Step 4-1: Create an image affine stitching node on the main control computer; subscribe to the best-match information published by the matching node and to the image data published by the CARMA DevKit image acquisition nodes.
Step 4-2: Compute the homography matrix between the left and right images from the best-matched feature points, apply the affine transformation, stitch the two images into one, and fuse the data.
The above describes only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any changes and substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed herein shall fall within the scope of protection of the present invention. Accordingly, the scope of protection of the present invention shall be determined by the claims.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410196649.5A CN103996181B (en) | 2014-05-12 | 2014-05-12 | Large field-of-view image stitching system and method based on bionic binocular vision |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN103996181A CN103996181A (en) | 2014-08-20 |
| CN103996181B true CN103996181B (en) | 2017-06-23 |