CN105979203A - Multi-camera cooperative monitoring method and device - Google Patents
- Publication number
- CN105979203A CN105979203A CN201610280010.4A CN201610280010A CN105979203A CN 105979203 A CN105979203 A CN 105979203A CN 201610280010 A CN201610280010 A CN 201610280010A CN 105979203 A CN105979203 A CN 105979203A
- Authority
- CN
- China
- Prior art keywords
- foreground
- dimensional
- point
- image
- fused
- Prior art date
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19639—Details of the system layout
- G08B13/19641—Multiple cameras having overlapping views on a single scene
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Alarm Systems (AREA)
Abstract
Embodiments of the present invention provide a multi-camera collaborative monitoring method and device. The method includes: combining the multiple cameras at a monitoring site into multiple camera combinations; obtaining the multiple frames of two-dimensional images captured by each camera combination in each acquisition period, and performing foreground extraction and projection on each frame to obtain three-dimensional foreground images; fusing the three-dimensional foreground images of all camera combinations at the same acquisition instant into a three-dimensional fused foreground map; determining the target objects, together with their spatial distribution characteristics and motion distribution characteristics, from the three-dimensional fused foreground maps at the acquisition instants and from the changes in the positions that real-world points occupy in those maps; and judging whether a target scenario has occurred. The invention makes full use of the three-dimensional spatial information of the monitored site and yields good monitoring results even under poor illumination and in wide-field, long-range monitoring.
Description
Technical Field
Embodiments of the present invention relate to the field of intelligent monitoring and, more specifically, to a multi-camera collaborative monitoring method and device.
Background
This section is intended to provide background or context for the embodiments of the invention recited in the claims. Nothing described here is admitted to be prior art merely by its inclusion in this section.
In the field of intelligent monitoring for equipment and personnel safety, the discovery and analysis of abnormal patterns has long been a key research direction with broad application prospects. However, detecting people and objects in real scenes accurately and in real time, and judging their behaviour, faces many challenges: illumination changes under irregular natural conditions, the influence of the camera's deployment viewpoint, and the loss of resolution at long range all degrade detection accuracy.
Mainstream camera-based anomaly alarming detects abnormal patterns in the colour or greyscale two-dimensional images obtained from the cameras. One major category is based on person detection and tracking: individual people are detected and tracked, their trajectories in the two-dimensional image coordinate system are roughly computed, and abnormal events are analysed from the resulting multi-person trajectories. This approach is usually suitable only for low-density, short-to-medium-range scenes; in real scenes, cluttered backgrounds, occlusion between people, and illumination changes often cause the detection-and-tracking algorithms to fail, so accurate results cannot be produced. A second category is based on low-level features: a background model of the scene is first built, the foreground is extracted by background subtraction, and features of the foreground regions (such as contour, texture, and temporal features of the foreground) are fed into a trained classifier to detect abnormal events. Its effectiveness is likewise limited by factors such as illumination changes in the environment, image resolution, and the distance between the event and the camera.
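The low-level-feature approach described above (build a background model, subtract it from each frame, and threshold the difference to obtain the foreground) can be sketched as a simple running-average model. The update rate and threshold below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly blend each new frame in."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Pixels that differ from the background model by more than the
    threshold are labelled foreground (True)."""
    return np.abs(frame.astype(np.float64) - background) > threshold

# Toy example: a static 4x4 scene with one bright "object" pixel.
background = np.zeros((4, 4))
frame = np.zeros((4, 4))
frame[1, 2] = 200.0

mask = foreground_mask(background, frame)
print(mask.sum())  # 1 foreground pixel
background = update_background(background, frame)
```

In practice the model is updated on every frame, so slow illumination drift is absorbed into the background while fast-moving objects remain foreground; this is exactly the weakness the passage notes, since sudden lighting changes are misclassified as foreground.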
Methods that use depth information for anomaly monitoring already exist. Most of them are based on active light and can accurately acquire depth values over a small range (typically 1 to 8 metres), making them suitable for indoor monitoring.
Summary of the Invention
However, the existing methods that use depth information for anomaly monitoring are strongly affected by outdoor illumination and have a short working distance and a narrow field of view, so they cannot be used for wide-area, wide-angle, long-range outdoor anomaly monitoring.
To this end, the present invention provides a multi-camera collaborative monitoring method and device to overcome the limitations of existing monitoring methods in wide-area, wide-angle, long-range outdoor anomaly monitoring.
In a first aspect of embodiments of the present invention, a multi-camera collaborative monitoring method is provided, including:
Step A: combining the multiple cameras deployed at the monitoring site into multiple camera combinations, each combination consisting of at least two cameras;
Step B: obtaining the two-dimensional images captured by each camera combination in each acquisition period, and performing steps B1 to B3 on the multiple frames of two-dimensional images captured in sequence by the current camera combination in the current acquisition period;
Step B1: extracting the foreground from the multiple frames of two-dimensional images to obtain multiple two-dimensional foreground images, and projecting the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images, where the acquisition instants of the current acquisition period, the multiple frames of two-dimensional images, the multiple two-dimensional foreground images, and the multiple three-dimensional foreground images are in one-to-one correspondence;
Step B2: calculating how the positions corresponding to real-world points change across the multiple two-dimensional foreground images, where a real-world point is the point in the real world corresponding to a two-dimensional foreground point, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
Step B3: calculating how the positions corresponding to real-world points change across the multiple three-dimensional foreground images, based on how their corresponding positions change across the multiple two-dimensional foreground images and on the correspondence between the two-dimensional and three-dimensional foreground images;
Step C: for each acquisition instant of the current acquisition period, fusing the three-dimensional foreground points of the three-dimensional foreground images of all camera combinations at that instant, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into a single three-dimensional fused foreground point, and combining all the resulting three-dimensional fused foreground points into the three-dimensional fused foreground map for that instant, where a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
Step D: calculating how the positions corresponding to real-world points change across the three-dimensional fused foreground maps at the acquisition instants of the current acquisition period, based on how their corresponding positions change across the multiple three-dimensional foreground images and on the correspondence between each three-dimensional fused foreground point and the three-dimensional foreground points fused into it;
Step E: determining the target objects, together with their spatial distribution characteristics and motion distribution characteristics, from the three-dimensional fused foreground maps at the acquisition instants of the current acquisition period and from how the positions corresponding to real-world points change across those maps;
Step F: judging, from the spatial distribution characteristics and motion distribution characteristics of the target objects, whether a target scenario has occurred;
Step G: outputting the judgment result and raising an alarm when a target scenario has occurred.
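Step A above can be sketched as a simple grouping routine. Pairing adjacent cameras is an illustrative policy of this sketch; the patent only requires that each combination contain at least two cameras:

```python
def make_camera_combinations(camera_ids, group_size=2):
    """Partition the deployed cameras into combinations of `group_size`
    (here: stereo pairs). A trailing camera left without a partner is
    attached to the last combination so every combination has >= 2
    cameras, as Step A requires."""
    groups = [camera_ids[i:i + group_size]
              for i in range(0, len(camera_ids), group_size)]
    if len(groups) > 1 and len(groups[-1]) < 2:
        groups[-2].extend(groups.pop())
    return groups

# Cameras a..d as in the oil and gas well scenario of Figure 1:
print(make_camera_combinations(["a", "b", "c", "d"]))  # [['a', 'b'], ['c', 'd']]
print(make_camera_combinations(["a", "b", "c"]))       # [['a', 'b', 'c']]
```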
In a second aspect of embodiments of the present invention, a multi-camera collaborative monitoring device is provided, including:
a camera division module, configured to combine the multiple cameras deployed at the monitoring site into multiple camera combinations, each consisting of at least two cameras;
an image acquisition module, configured to obtain the two-dimensional images captured by each camera combination in each acquisition period;
an image processing module, configured to process the multiple frames of two-dimensional images captured in sequence by the current camera combination in the current acquisition period;
the image processing module further includes:
a first image processing module, configured to extract the foreground from the multiple frames of two-dimensional images to obtain multiple two-dimensional foreground images;
a second image processing module, configured to project the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images, where the acquisition instants of the current acquisition period, the multiple frames of two-dimensional images, the multiple two-dimensional foreground images, and the multiple three-dimensional foreground images are in one-to-one correspondence;
a third image processing module, configured to calculate how the positions corresponding to real-world points change across the multiple two-dimensional foreground images, where a real-world point is the point in the real world corresponding to a two-dimensional foreground point, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
a fourth image processing module, configured to calculate how the positions corresponding to real-world points change across the multiple three-dimensional foreground images, based on how their corresponding positions change across the multiple two-dimensional foreground images and on the correspondence between the two-dimensional and three-dimensional foreground images;
a fusion processing module, configured to fuse, for each acquisition instant of the current acquisition period, the three-dimensional foreground points of the three-dimensional foreground images of all camera combinations at that instant, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into a single three-dimensional fused foreground point, and to combine all the resulting three-dimensional fused foreground points into the three-dimensional fused foreground map for that instant, where a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
a displacement calculation module, configured to calculate how the positions corresponding to real-world points change across the three-dimensional fused foreground maps at the acquisition instants of the current acquisition period, based on how their corresponding positions change across the multiple three-dimensional foreground images and on the correspondence between each three-dimensional fused foreground point and the three-dimensional foreground points fused into it;
a target search module, configured to determine the target objects, together with their spatial distribution characteristics and motion distribution characteristics, from the three-dimensional fused foreground maps at the acquisition instants of the current acquisition period and from how the positions corresponding to real-world points change across those maps;
a judgment module, configured to judge, from the spatial distribution characteristics and motion distribution characteristics of the target objects, whether a target scenario has occurred;
an output module, configured to output the judgment result and raise an alarm when a target scenario has occurred.
With the above technical solution, the invention uses the images captured by all cameras at the monitoring site collaboratively and makes full use of the three-dimensional spatial information of the captured scene. Compared with existing methods, it gives more accurate detection results even in large-scale scenes with poor illumination, wide viewing angles, and long ranges, improving the accuracy and robustness of outdoor monitoring; it is especially suitable for monitoring applications with few patrol personnel and harsh weather conditions.
Brief Description of the Drawings
The above and other objects, features, and advantages of exemplary embodiments of the present invention will become readily understood by reading the following detailed description with reference to the accompanying drawings. The drawings show several embodiments of the invention by way of example and not limitation, in which:
Figure 1 is a schematic diagram of the invention applied to an oil and gas well equipment safety monitoring scenario;
Figure 2 is a flow chart of the multi-camera collaborative monitoring method provided by the invention;
Figure 3 is a schematic diagram of image-processing the two-dimensional images captured by one camera combination at each acquisition instant to obtain two-dimensional and three-dimensional foreground images;
Figure 4 is a schematic diagram of fusing the three-dimensional foreground images of all camera combinations at the same acquisition instant into a three-dimensional fused foreground map;
Figure 5 is a schematic diagram of the inputs and outputs of the multi-camera collaborative monitoring device;
Figure 6 is a structural block diagram of the multi-camera collaborative monitoring device.
In the drawings, the same or corresponding reference numerals denote the same or corresponding parts.
Detailed Description
The principle and spirit of the present invention are described below with reference to several exemplary embodiments. It should be understood that these embodiments are given only to enable those skilled in the art to better understand and implement the present invention, not to limit its scope in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will appreciate that embodiments of the present invention may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied entirely in hardware, entirely in software (including firmware, resident software, microcode, etc.), or in a combination of hardware and software.
It should be noted that the term "two-dimensional foreground point" herein refers to a pixel of a two-dimensional foreground image, and the term "three-dimensional foreground point" refers to a voxel of a three-dimensional foreground image.
According to embodiments of the present invention, a multi-camera collaborative monitoring method and device are proposed.
The principle and spirit of the present invention are explained in detail below with reference to several representative embodiments.
Overview of the Invention
The invention makes full use of the existing multiple camera feeds to monitor abnormal situations in the scene in real time. First, the cameras at the monitoring site are combined in pairs into multiple camera combinations, and the two-dimensional images captured by each combination are obtained in real time. Second, for the two-dimensional images captured by each camera combination, the following processing is performed in sequence: computing two-dimensional depth maps, extracting two-dimensional foreground points and projecting them to obtain three-dimensional foreground points, and computing the three-dimensional foreground optical flow. The three-dimensional foreground point clouds of all camera combinations are then fused into a three-dimensional fused foreground point cloud, and a bird's-eye view and a three-dimensional fused foreground optical flow are derived from the fused point cloud and the three-dimensional foreground optical flow. Finally, template matching, pattern classification, and similar processing are applied to the bird's-eye view and the three-dimensional fused foreground optical flow to judge and output the target scenario.
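The per-combination depth maps mentioned above rely on standard stereo geometry: for a rectified pair with focal length f (in pixels) and baseline B (in metres), depth Z = f·B / d for disparity d. A minimal sketch with illustrative values (not parameters from the patent):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Rectified-stereo relation Z = f * B / d. Zero disparity means no
    stereo match; those pixels are mapped to an invalid depth of 0."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# A 2x2 toy disparity map (pixels); f = 800 px, B = 0.5 m:
disparity = np.array([[0.0, 8.0], [16.0, 32.0]])
depth = disparity_to_depth(disparity, focal_px=800.0, baseline_m=0.5)
print(depth)  # depths 0 (invalid), 50, 25, 12.5 metres
```

The inverse relation between disparity and depth is why the patent pairs cameras into combinations: a wider baseline B preserves depth resolution at the long ranges targeted here.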
Having introduced the basic principle of the present invention, various non-limiting embodiments are described in detail below.
Overview of the Application Scenario
Figure 1 shows a schematic of the invention applied to an oil and gas well equipment safety monitoring scenario: the well equipment and personnel are monitored through multiple cameras already deployed at the site (camera a, camera b, camera c, and camera d). Within the coverage of these cameras, personnel and equipment are monitored in real time, and an alarm can be raised as soon as abnormal behaviour occurs. Abnormal behaviour here includes, but is not limited to, fighting, running, stealing well equipment, placing objects on well equipment, entering restricted areas, falling down, and carrying dangerous items (machetes, long knives, etc.).
It should be noted that Figure 1 only schematically shows one application scenario of the invention, which is not limited to the scenario shown there. Those skilled in the art will understand that the invention can also be applied to any other monitoring scenario, such as logistics warehouse supervision or military border monitoring.
Exemplary Method
The multi-camera collaborative monitoring method provided by the invention is described below with reference to Figure 2, in combination with the application scenario of Figure 1.
It should be noted that the above application scenario is shown only to aid understanding of the spirit and principle of the invention; embodiments of the invention are not limited in this respect and can be applied to any suitable scene.
As shown in Figure 2, the multi-camera collaborative monitoring method provided by the invention includes:
Step S1: combining the multiple cameras deployed at the monitoring site into multiple camera combinations, each combination consisting of at least two cameras;
Step S2: obtaining the two-dimensional images captured by each camera combination in each acquisition period, and performing steps S21 to S23 on the multiple frames of two-dimensional images captured in sequence by the current camera combination in the current acquisition period;
Step S21: extracting the foreground from the multiple frames of two-dimensional images to obtain multiple two-dimensional foreground images, and projecting the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images, where the acquisition instants of the current acquisition period, the multiple frames of two-dimensional images, the multiple two-dimensional foreground images, and the multiple three-dimensional foreground images are in one-to-one correspondence;
Step S22: calculating how the positions corresponding to real-world points change across the multiple two-dimensional foreground images, where a real-world point is the point in the real world corresponding to a two-dimensional foreground point, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
Step S23: calculating how the positions corresponding to real-world points change across the multiple three-dimensional foreground images, based on how their corresponding positions change across the multiple two-dimensional foreground images and on the correspondence between the two-dimensional and three-dimensional foreground images;
Step S3: for each acquisition instant of the current acquisition period, fusing the three-dimensional foreground points of the three-dimensional foreground images of all camera combinations at that instant, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into a single three-dimensional fused foreground point, and combining all the resulting three-dimensional fused foreground points into the three-dimensional fused foreground map for that instant, where a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
Step S4: calculating how the positions corresponding to real-world points change across the three-dimensional fused foreground maps at the acquisition instants of the current acquisition period, based on how their corresponding positions change across the multiple three-dimensional foreground images and on the correspondence between each three-dimensional fused foreground point and the three-dimensional foreground points fused into it;
Step S5: determining the target objects, together with their spatial distribution characteristics and motion distribution characteristics, from the three-dimensional fused foreground maps at the acquisition instants of the current acquisition period and from how the positions corresponding to real-world points change across those maps;
Step S6: judging, from the spatial distribution characteristics and motion distribution characteristics of the target objects, whether a target scenario has occurred;
Step S7: outputting the judgment result and raising an alarm when a target scenario has occurred.
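Step S3's rule — fuse all three-dimensional foreground points that correspond to the same real-world point — can be sketched by quantising all point clouds into a shared world voxel grid and keeping one fused point per occupied voxel. The voxel size and the centroid-based fusion below are illustrative assumptions of this sketch:

```python
import numpy as np

def fuse_foreground_points(point_clouds, voxel_size=0.1):
    """Fuse the 3D foreground point clouds of all camera combinations at
    one acquisition instant: points falling into the same voxel of a
    shared world grid are treated as observations of the same real-world
    point and replaced by a single fused point (the voxel centroid)."""
    points = np.vstack(point_clouds)                  # (N, 3) world coords
    keys = np.floor(points / voxel_size).astype(int)  # voxel index per point
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in buckets.values()])

# Two camera combinations observe the same object corner (within one
# voxel of each other), plus one point seen by only one combination:
z1 = np.array([[1.00, 2.00, 0.50]])
z2 = np.array([[1.02, 2.01, 0.52], [5.0, 5.0, 5.0]])
fused = fuse_foreground_points([z1, z2])
print(len(fused))  # 2 fused foreground points
```

Keeping, per fused point, the list of contributing points from each combination gives exactly the correspondence Step S4 needs to propagate displacements into the fused foreground maps.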
The multi-camera collaborative monitoring method shown in Figure 2 is introduced below through the image-processing procedure shown in Figures 3 and 4.
Suppose the monitoring site has three camera combinations Z1, Z2, and Z3, and the acquisition instants of the current acquisition period are T1, T2, and T3.
As shown in Figure 3, camera combination Z1 captures a two-dimensional image at each of the acquisition instants T1, T2, and T3. The foreground is extracted from each of these three frames, giving three corresponding two-dimensional foreground images, which are then projected into three-dimensional space to give three corresponding three-dimensional foreground images. Specifically, a two-dimensional foreground image is projected into three-dimensional space as follows: each of its two-dimensional foreground points is projected into three-dimensional space to obtain a three-dimensional foreground point, and the three-dimensional foreground points of all two-dimensional foreground points together form the three-dimensional foreground image. A three-dimensional foreground image is in fact a point cloud, corresponding to the real-world object that the two-dimensional foreground image depicts.
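Projecting a two-dimensional foreground point into three-dimensional space, as described above, can be done with the pinhole camera model once the point's depth is known: pixel (u, v) at depth Z maps to X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy. The intrinsic parameters below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift one 2D foreground pixel (u, v) with known depth into a 3D
    foreground point in the camera frame (pinhole model)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Illustrative intrinsics for a 640x480 camera:
fx = fy = 500.0
cx, cy = 320.0, 240.0
point = backproject(u=420.0, v=240.0, depth=10.0, fx=fx, fy=fy, cx=cx, cy=cy)
print(point)  # a 3D foreground point 2 m right of, and 10 m in front of, the camera
```

Applying this to every two-dimensional foreground point of a frame yields the point cloud that the passage calls the three-dimensional foreground image; a further extrinsic transform would place it in world coordinates shared by all camera combinations.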
对于真实世界中的点P和点Q，其在采集时刻T1的二维前景图像中的位置分别为P1和Q1，在该时刻的三维前景图像中的位置分别为P1’和Q1’；在采集时刻T2的二维前景图像中的位置分别为P2和Q2，在该时刻的三维前景图像中的位置分别为P2’和Q2’；在采集时刻T3的二维前景图像中的位置分别为P3和Q3，在该时刻的三维前景图像中的位置分别为P3’和Q3’。For points P and Q in the real world, their positions in the 2D foreground image at acquisition time T1 are P1 and Q1 respectively, and their positions in the 3D foreground image at that time are P1' and Q1' respectively; their positions in the 2D foreground image at acquisition time T2 are P2 and Q2, and in the 3D foreground image at that time P2' and Q2'; their positions in the 2D foreground image at acquisition time T3 are P3 and Q3, and in the 3D foreground image at that time P3' and Q3'.
根据P1、P2、P3的位置，可以计算出真实世界中的点P在各个采集时刻的二维前景图像中所对应位置的变化情况，这种变化情况可以用二维向量表示；根据Q1、Q2、Q3的位置，可以计算出真实世界中的点Q在各个采集时刻的二维前景图像中所对应位置的变化情况，这种变化情况可以用二维向量表示。From the positions of P1, P2 and P3, the change in the position corresponding to real-world point P across the 2D foreground images at the successive acquisition times can be calculated; this change can be represented by a two-dimensional vector. Likewise, from the positions of Q1, Q2 and Q3, the change in the position corresponding to real-world point Q across the 2D foreground images can be calculated and represented by a two-dimensional vector.
根据P1、P1’的对应关系，P2、P2’的对应关系，以及二维向量，可以确定真实世界中的点P在采集时刻T1和T2的三维前景图像中所对应位置的变化情况，即确定三维向量。类似的，根据P2、P2’的对应关系，P3、P3’的对应关系，以及二维向量，可以确定真实世界中的点P在采集时刻T2和T3的三维前景图像中所对应位置的变化情况，即确定三维向量。综合以上，即可确定真实世界中的点P在各个采集时刻的三维前景图像中所对应位置的变化情况。同理，可以确定真实世界中的点Q在各个采集时刻的三维前景图像中所对应位置的变化情况。From the correspondence between P1 and P1', the correspondence between P2 and P2', and the two-dimensional vector determined above, the change in the position corresponding to real-world point P between the 3D foreground images at acquisition times T1 and T2 can be determined, i.e. a three-dimensional vector is determined. Similarly, from the correspondence between P2 and P2', the correspondence between P3 and P3', and the two-dimensional vector, the change in the position corresponding to point P between the 3D foreground images at acquisition times T2 and T3 can be determined, i.e. another three-dimensional vector is determined. Combining the above, the change in the position corresponding to real-world point P across the 3D foreground images at all the acquisition times can be determined. In the same way, the change in the position corresponding to real-world point Q across the 3D foreground images can be determined.
如图4所示，在采集时刻T1，真实世界中的点X和点Y在摄像机组合Z1的三维前景图像中所对应的位置分别为X-Z1-T1和Y-Z1-T1，在摄像机组合Z2的三维前景图像中所对应的位置分别为X-Z2-T1，Y-Z2-T1，在摄像机组合Z3的三维前景图像中所对应的位置分别为X-Z3-T1，Y-Z3-T1。在该采集时刻T1，将对应于真实世界中的点X的全部三维前景点融合成为一个三维融合前景点；将对应于真实世界中的点Y的全部三维前景点融合成为一个三维融合前景点。这样的全部三维融合前景点组成了该采集时刻T1的三维融合前景图。As shown in FIG. 4, at acquisition time T1 the positions corresponding to real-world points X and Y in the 3D foreground image of camera combination Z1 are X-Z1-T1 and Y-Z1-T1 respectively, in the 3D foreground image of camera combination Z2 they are X-Z2-T1 and Y-Z2-T1, and in the 3D foreground image of camera combination Z3 they are X-Z3-T1 and Y-Z3-T1. At acquisition time T1, all of the 3D foreground points corresponding to real-world point X are fused into one 3D fused foreground point, and all of the 3D foreground points corresponding to real-world point Y are fused into another. All such 3D fused foreground points make up the 3D fused foreground map at acquisition time T1.
类似的，在采集时刻T2，真实世界中的点X和点Y在摄像机组合Z1的三维前景图像中所对应的位置分别为X-Z1-T2和Y-Z1-T2，在摄像机组合Z2的三维前景图像中所对应的位置分别为X-Z2-T2，Y-Z2-T2，在摄像机组合Z3的三维前景图像中所对应的位置分别为X-Z3-T2，Y-Z3-T2。在该采集时刻T2，将对应于真实世界中的点X的全部三维前景点融合成为一个三维融合前景点；将对应于真实世界中的点Y的全部三维前景点融合成为一个三维融合前景点。这样的全部三维融合前景点组成了该采集时刻T2的三维融合前景图。Similarly, at acquisition time T2 the positions corresponding to real-world points X and Y in the 3D foreground image of camera combination Z1 are X-Z1-T2 and Y-Z1-T2 respectively, in the 3D foreground image of camera combination Z2 they are X-Z2-T2 and Y-Z2-T2, and in the 3D foreground image of camera combination Z3 they are X-Z3-T2 and Y-Z3-T2. At acquisition time T2, all of the 3D foreground points corresponding to real-world point X are fused into one 3D fused foreground point, and all of the 3D foreground points corresponding to real-world point Y are fused into another. All such 3D fused foreground points make up the 3D fused foreground map at acquisition time T2.
类似的，在采集时刻T3，真实世界中的点X和点Y在摄像机组合Z1的三维前景图像中所对应的位置分别为X-Z1-T3和Y-Z1-T3，在摄像机组合Z2的三维前景图像中所对应的位置分别为X-Z2-T3，Y-Z2-T3，在摄像机组合Z3的三维前景图像中所对应的位置分别为X-Z3-T3，Y-Z3-T3。在该采集时刻T3，将对应于真实世界中的点X的全部三维前景点融合成为一个三维融合前景点；将对应于真实世界中的点Y的全部三维前景点融合成为一个三维融合前景点。这样的全部三维融合前景点组成了该采集时刻T3的三维融合前景图。Similarly, at acquisition time T3 the positions corresponding to real-world points X and Y in the 3D foreground image of camera combination Z1 are X-Z1-T3 and Y-Z1-T3 respectively, in the 3D foreground image of camera combination Z2 they are X-Z2-T3 and Y-Z2-T3, and in the 3D foreground image of camera combination Z3 they are X-Z3-T3 and Y-Z3-T3. At acquisition time T3, all of the 3D foreground points corresponding to real-world point X are fused into one 3D fused foreground point, and all of the 3D foreground points corresponding to real-world point Y are fused into another. All such 3D fused foreground points make up the 3D fused foreground map at acquisition time T3.
结合图3和图4可知，三维前景点X-Z1-T1、X-Z1-T2、X-Z1-T3之间的关系与三维前景点P1’、P2’、P3’之间的关系一致。根据前面的说明已经计算出真实世界中的点P在采集时刻T1、T2和T3的三维前景图像中所对应位置的变化情况，即三维向量。也就是说，此处的三维前景点X-Z1-T1、X-Z1-T2、X-Z1-T3所在位置的变化情况可以被计算出来，记为三维向量。Combining FIG. 3 and FIG. 4, the relationship among the 3D foreground points X-Z1-T1, X-Z1-T2 and X-Z1-T3 is the same as the relationship among the 3D foreground points P1', P2' and P3'. As explained above, the change in the position corresponding to real-world point P across the 3D foreground images at acquisition times T1, T2 and T3 has already been calculated as a three-dimensional vector. In other words, the change in the positions of the 3D foreground points X-Z1-T1, X-Z1-T2 and X-Z1-T3 can be calculated and recorded as a three-dimensional vector.
同样的，三维前景点X-Z2-T1、X-Z2-T2、X-Z2-T3所在位置的变化情况也可以被计算出来，记为三维向量。Similarly, the change in the positions of the 3D foreground points X-Z2-T1, X-Z2-T2 and X-Z2-T3 can also be calculated and recorded as a three-dimensional vector.
同样的，三维前景点X-Z3-T1、X-Z3-T2、X-Z3-T3所在位置的变化情况也可以被计算出来，记为三维向量。Similarly, the change in the positions of the 3D foreground points X-Z3-T1, X-Z3-T2 and X-Z3-T3 can also be calculated and recorded as a three-dimensional vector.
在采集时刻T1，真实世界中的点X对应于三维前景点X-Z1-T1、X-Z2-T1、X-Z3-T1；在采集时刻T2，真实世界中的点X对应于三维前景点X-Z1-T2、X-Z2-T2、X-Z3-T2；在采集时刻T3，真实世界中的点X对应于三维前景点X-Z1-T3、X-Z2-T3、X-Z3-T3；At acquisition time T1, real-world point X corresponds to the 3D foreground points X-Z1-T1, X-Z2-T1 and X-Z3-T1; at acquisition time T2, it corresponds to X-Z1-T2, X-Z2-T2 and X-Z3-T2; at acquisition time T3, it corresponds to X-Z1-T3, X-Z2-T3 and X-Z3-T3;
根据三维融合前景点与三维前景点X-Z1-T1、X-Z2-T1、X-Z3-T1之间的对应关系，与三维前景点X-Z1-T2、X-Z2-T2、X-Z3-T2之间的对应关系，与三维前景点X-Z1-T3、X-Z2-T3、X-Z3-T3之间的对应关系，以及根据三维向量，就可以计算出三维融合前景点所在位置的变化情况，可以用三维向量表示，即真实世界中的点X在各个采集时刻的三维融合前景图中所对应位置的变化情况。From the correspondence between the 3D fused foreground point and the 3D foreground points X-Z1-T1, X-Z2-T1 and X-Z3-T1, its correspondence with the 3D foreground points X-Z1-T2, X-Z2-T2 and X-Z3-T2, its correspondence with the 3D foreground points X-Z1-T3, X-Z2-T3 and X-Z3-T3, and the three-dimensional vectors computed above, the change in the position of the 3D fused foreground point can be calculated and represented by a three-dimensional vector, i.e. the change in the position corresponding to real-world point X across the 3D fused foreground maps at the successive acquisition times.
本发明协同地利用了监控现场所有摄像机采集的图像，充分利用了所拍摄场景的三维空间信息，即使在光照较差、大广角和远距离的大范围场景中，也可给出更加准确的检测结果，提高了室外监测的检测精度和鲁棒性，尤其适用于巡逻人员少、气候条件差的监控领域，例如油田井作业现场、物流仓库监管、部队边防监控等领域。The present invention cooperatively uses the images collected by all cameras at the monitoring site and fully exploits the three-dimensional spatial information of the captured scene. It can give more accurate detection results even in large-scale scenes with poor illumination, wide viewing angles and long distances, improving the accuracy and robustness of outdoor monitoring. It is especially suitable for monitored areas with few patrol personnel and harsh weather conditions, such as oil-well operation sites, logistics warehouse supervision and military border monitoring.
结合图3~图4,分别对本方法中的每个步骤进行详细介绍。Each step in the method is introduced in detail with reference to FIGS. 3 to 4 .
步骤S1,按照至少两台摄像机组成一个摄像机组合的方式,将部署于监控现场的多台摄像机组合成多个摄像机组合。In step S1, multiple cameras deployed at the monitoring site are combined into multiple camera combinations in such a manner that at least two cameras form a camera combination.
本步骤中形成摄像机组合的目的是利用摄像机组合中各台摄像机所采集的影像来计算所拍摄场景的深度值。The purpose of forming the camera combination in this step is to use the images collected by each camera in the camera combination to calculate the depth value of the captured scene.
具体实施时，可选地，可以将监控现场的摄像机两两分组形成若干摄像机组合。摄像机组合中的两个摄像机所采集的两路二维影像图分别相当于人的双眼所看到的二维影像，可用于计算深度值，即二维影像图中的像素点在真实世界中所对应的点（所拍摄场景中物体表面上的点）到摄像机的距离。In a specific implementation, the cameras at the monitoring site may optionally be grouped in pairs to form several camera combinations. The two 2D images collected by the two cameras of a combination are analogous to the two images seen by a person's eyes and can be used to compute depth values, i.e. the distance from the real-world point corresponding to each pixel of the 2D image (a point on an object surface in the captured scene) to the camera.
基于人眼的三维视觉原理，要利用摄像机组合中的两个摄像机所采集的二维影像图计算深度值，这就要求位于同一摄像机组合中的两个摄像机所覆盖的场景需要存在一定的交集。一般情况下，位置临近的摄像机所覆盖的场景会存在交集，基于这点考虑，具体实施时可以将监控现场任意两个位置相距小于一预设距离的摄像机组成一个摄像机组合。例如图1中的四台摄像机a、b、c、d，根据其装设位置配对，最后得到三个摄像机组合(a,b),(b,c),(c,d)。Based on the principle of human binocular vision, computing depth values from the 2D images collected by the two cameras of a combination requires that the scenes covered by the two cameras of the same combination overlap to some extent. In general, the scenes covered by cameras mounted near each other do overlap; on this basis, in a specific implementation any two cameras at the monitoring site whose positions are less than a preset distance apart may be formed into a camera combination. For example, the four cameras a, b, c and d in FIG. 1 are paired according to their installation positions, finally yielding three camera combinations (a, b), (b, c) and (c, d).
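The distance-based pairing rule of step S1 can be sketched as follows. This is a minimal illustration, assuming known 2D mounting coordinates in metres; the function name `pair_cameras` and the 8 m threshold are illustrative, not from the patent.

```python
from itertools import combinations
import math

def pair_cameras(positions, max_distance):
    """Group cameras into pairs whose mounting positions are closer
    than max_distance, so each pair's views are likely to overlap."""
    pairs = []
    for (name_a, pos_a), (name_b, pos_b) in combinations(positions.items(), 2):
        if math.dist(pos_a, pos_b) < max_distance:
            pairs.append((name_a, name_b))
    return pairs

# Cameras a, b, c, d mounted along a line, 5 m apart, with an 8 m threshold
cams = {"a": (0, 0), "b": (5, 0), "c": (10, 0), "d": (15, 0)}
print(pair_cameras(cams, 8.0))  # [('a', 'b'), ('b', 'c'), ('c', 'd')]
```

This reproduces the example above: adjacent cameras pair up, yielding (a, b), (b, c) and (c, d).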
本步骤可以利用远程终端单元(Remote Terminal Unit,RTU)获取每台摄像机基于网络时间协议(Network Time Protocol,NTP)所采集的图像帧,即二维影像图。In this step, a remote terminal unit (Remote Terminal Unit, RTU) can be used to obtain image frames collected by each camera based on a Network Time Protocol (Network Time Protocol, NTP), that is, a two-dimensional image map.
根据摄像机功能的不同,所采集的二维影像图可能是灰度图像,也可能是彩色图像(如RGB彩色图像)。According to different functions of the camera, the collected two-dimensional image may be a grayscale image or a color image (such as an RGB color image).
在实际监控现场，各个摄像机的型号、参数等可能不同，拍摄得到的图像也可能大小、形状不一，考虑到这一点，具体实施时，本发明还可以对各个摄像机采集的二维影像图进行畸变校正和对齐处理。例如，本步骤可以采用基于棋盘格的方法得到各个摄像机的畸变矩阵、摄像机内参和外参，并基于得到的各个摄像机的畸变矩阵、内参和外参对相应摄像机采集的二维影像图进行畸变矫正和对齐处理，具体实现可参考OpenCV(Open Source Computer Vision Library)提供的方法，本文不再赘述。At an actual monitoring site, the models and parameters of the cameras may differ, and the captured images may differ in size and shape. Taking this into account, in a specific implementation the present invention may also apply distortion correction and alignment to the 2D images collected by each camera. For example, a checkerboard-based method may be used in this step to obtain each camera's distortion matrix and intrinsic and extrinsic parameters, and the 2D images collected by the corresponding camera are then distortion-corrected and aligned based on them; for a concrete implementation refer to the methods provided by OpenCV (Open Source Computer Vision Library), which are not repeated here.
步骤S2,获取每个摄像机组合在每个采集周期采集的二维影像图,并对当前摄像机组合在当前采集周期中依次采集的多帧二维影像图进行步骤S21~步骤S23的图像处理:Step S2: obtain the 2D images collected by each camera combination in each acquisition cycle, and perform the image processing of steps S21 to S23 on the multiple frames of 2D images sequentially collected by the current camera combination in the current acquisition cycle:
步骤S21,对所述多帧二维影像图提取前景,得到多个二维前景图像;将所述多个二维前景图像投影至三维空间中,得到多个三维前景图像;其中,当前采集周期的各个采集时刻、所述多帧二维影像图、所述多个二维前景图像和所述多个三维前景图像具有一一对应的关系。Step S21: extract the foreground from the multiple frames of 2D images to obtain multiple 2D foreground images, and project the multiple 2D foreground images into three-dimensional space to obtain multiple 3D foreground images; the acquisition times of the current acquisition cycle, the multiple frames of 2D images, the multiple 2D foreground images and the multiple 3D foreground images are in one-to-one correspondence.
具体的,本步骤在完成对二维影像图提取前景的过程时,可以采用先进行背景建模(例如是静态背景建模或者混合高斯背景建模等方法),然后利用背景减除的方法。Specifically, when completing the process of extracting the foreground from the two-dimensional image in this step, background modeling (for example, static background modeling or mixed Gaussian background modeling) can be used first, and then the method of background subtraction can be used.
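The background-modelling-plus-subtraction idea above can be sketched with the simplest static model. The function name `extract_foreground`, the median background model, and the threshold of 25 grey levels are illustrative assumptions, not the patent's parameters; a Gaussian-mixture model would adapt better to changing scenes.

```python
import numpy as np

def extract_foreground(training_frames, frame, thresh=25):
    """Minimal static-background subtraction (one of the modelling
    options named in step S21): the background is the per-pixel median
    of a stack of training frames, and pixels differing from it by
    more than `thresh` grey levels are marked as foreground."""
    background = np.median(np.stack(training_frames), axis=0)
    return np.abs(frame.astype(np.float64) - background) > thresh

# Synthetic example: a flat grey background and one bright intruding blob
bg = np.full((40, 40), 100, dtype=np.uint8)
frames = [bg.copy() for _ in range(5)]
current = bg.copy()
current[10:15, 10:15] = 200           # 5x5 intruding object
mask = extract_foreground(frames, current)
print(int(mask.sum()))                # 25 foreground pixels
```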
具体的,本步骤在完成将二维前景图像投影至三维空间中得到三维前景图像的过程时,需要考虑摄像机成像的二维空间与真实世界的三维空间之间的转换关系。Specifically, in this step, when completing the process of projecting the 2D foreground image into the 3D space to obtain the 3D foreground image, it is necessary to consider the conversion relationship between the 2D space imaged by the camera and the 3D space of the real world.
可选地,步骤S21中可以按照步骤S211~步骤S215的过程将二维前景图像投影至三维空间中:Optionally, in step S21, the two-dimensional foreground image can be projected into three-dimensional space according to the process of step S211 to step S215:
步骤S211,在二维前景图像对应的二维影像图中,确定每个二维前景点的二维坐标;Step S211, determining the two-dimensional coordinates of each two-dimensional foreground point in the two-dimensional image map corresponding to the two-dimensional foreground image;
步骤S212,在二维前景图像对应的二维深度图中,确定每个二维前景点的深度值。其中,二维深度图与摄像机采集的二维影像图采用相同的坐标轴,对于具有相同坐标的像素点来说,二维影像图中的像素值是该像素点的影像信息,而二维深度图中的像素值是该像素点在真实世界中对应的点到摄像机的距离(即深度值)。Step S212: determine the depth value of each 2D foreground point in the 2D depth map corresponding to the 2D foreground image. The 2D depth map uses the same coordinate axes as the 2D image collected by the camera; for a pixel with the same coordinates, the pixel value in the 2D image is that pixel's image information, while the pixel value in the 2D depth map is the distance from the real-world point corresponding to that pixel to the camera (i.e. the depth value).
在计算深度值时，可以采用传统的块匹配、动态规划法、图割法、半全局匹配等方法。可选地，本发明也可以采用如下方法计算深度值：首先，根据传统视差计算方法得到初始视差；然后，对二维影像图中的所有像素构建图模型，其中图模型的结点为像素的视差值，图模型的边为像素之间的相似度量；最后，像素的视差值通过图模型中的多次迭代传播来达到全局的最优，根据摄像机的外参和内参将视差信息转换为深度值。When calculating depth values, traditional methods such as block matching, dynamic programming, graph cuts and semi-global matching may be used. Optionally, the present invention may also calculate depth values as follows: first, an initial disparity is obtained by a traditional disparity calculation method; then a graph model is built over all pixels of the 2D image, in which the nodes are the pixels' disparity values and the edges are similarity measures between pixels; finally, the pixel disparities are propagated through multiple iterations over the graph model to reach a global optimum, and the disparity information is converted into depth values according to the camera's extrinsic and intrinsic parameters.
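The final disparity-to-depth conversion can be sketched with the standard rectified-stereo relation depth = f · B / d (focal length times baseline over disparity). This is the usual textbook conversion, assumed here; the patent's iterative graph-model disparity refinement is not reproduced, and the focal length and baseline values below are illustrative.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map to a depth map with the standard
    rectified-stereo relation depth = f * B / d. Pixels without a
    valid disparity (d <= 0) keep depth 0, mirroring the text's
    remark that depth computation can fail in textureless regions."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# With f = 700 px and a 0.1 m baseline, a 35 px disparity is about 2 m away
depth = disparity_to_depth([[35.0, 0.0], [70.0, 7.0]], 700.0, 0.1)
print(depth)
```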
步骤S213,根据二维前景点的二维坐标和深度值,将二维前景点投影到摄像机坐标系中。其中,该摄像机坐标系是指摄像机组合中采集该二维深度图(从中提取该二维前景图像)的摄像机的坐标系。Step S213, project the two-dimensional foreground point into the camera coordinate system according to the two-dimensional coordinates and the depth value of the two-dimensional foreground point. Wherein, the camera coordinate system refers to the coordinate system of the camera that collects the 2D depth map (from which the 2D foreground image is extracted) in the camera combination.
摄像机坐标系是和观察者密切相关的坐标系。摄像机坐标系类似于二维影像图的坐标系,差别在于摄像机坐标系处于三维空间中,而二维影像图的坐标系在二维空间里。摄像机坐标系的三个坐标轴中,其中两个分别与二维影像图的两个坐标轴平行,另外一个是垂直于二维影像图。摄像机坐标系的原点为二维影像图的中心。The camera coordinate system is a coordinate system closely related to the observer. The camera coordinate system is similar to the coordinate system of the 2D image, the difference is that the camera coordinate system is in the 3D space, while the coordinate system of the 2D image is in the 2D space. Among the three coordinate axes of the camera coordinate system, two of them are parallel to the two coordinate axes of the two-dimensional image map, and the other is perpendicular to the two-dimensional image map. The origin of the camera coordinate system is the center of the 2D image.
可选地,本步骤可以采用如下公式完成将二维前景点投影到摄像机坐标系的投影过程:Optionally, in this step, the following formula can be used to complete the projection process of projecting the two-dimensional foreground point to the camera coordinate system:
其中,(u,v)是二维前景点的二维坐标,分别对应于二维影像图的两坐标轴;z是二维前景点的深度值;cu和cv是二维影像图的中心坐标;fu和fv分别是采集该帧二维影像图的摄像机在所述两坐标轴方向的焦距;UC、VC、ZC是二维前景点在摄像机坐标系中的投影点的坐标。where (u, v) are the 2D coordinates of the 2D foreground point along the two coordinate axes of the 2D image; z is the depth value of the 2D foreground point; cu and cv are the centre coordinates of the 2D image; fu and fv are the focal lengths, along those two axes, of the camera that collected this frame; and UC, VC, ZC are the coordinates of the projection of the 2D foreground point in the camera coordinate system.
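The projection formula itself was rendered as an image in the original, so the sketch below assumes the standard pinhole back-projection that matches the parameter description: UC = (u − cu) · z / fu, VC = (v − cv) · z / fv, ZC = z. The function name and the numeric intrinsics are illustrative.

```python
import numpy as np

def backproject(u, v, z, cu, cv, fu, fv):
    """Project a 2D foreground point (u, v) with depth z into the
    camera coordinate system (step S213), assuming the standard
    pinhole model implied by the parameter description."""
    return np.array([(u - cu) * z / fu, (v - cv) * z / fv, z])

# Image centre (320, 240), focal lengths 500 px, a point 4 m deep
p = backproject(u=420.0, v=240.0, z=4.0, cu=320.0, cv=240.0, fu=500.0, fv=500.0)
print(p)  # approximately [0.8, 0.0, 4.0]
```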
步骤S214,将二维前景点在摄像机坐标系中的投影点继续投影到世界坐标系中,将在世界坐标系中得到的投影点确定为三维前景点。Step S214, continue to project the projection point of the two-dimensional foreground point in the camera coordinate system to the world coordinate system, and determine the projected point obtained in the world coordinate system as the three-dimensional foreground point.
由于不同摄像机安放的位置、高度、角度等各有不同,为了在监控现场描述不同摄像机的绝对位置和相对位置,采用世界坐标系来描述每个摄像机的位置、高度、角度。摄像机坐标系与世界坐标系之间可以用旋转变换矩阵和平移变换矩阵来转换。Since different cameras have different positions, heights, and angles, in order to describe the absolute and relative positions of different cameras at the monitoring site, the world coordinate system is used to describe the position, height, and angle of each camera. The rotation transformation matrix and translation transformation matrix can be used to convert between the camera coordinate system and the world coordinate system.
可选地,本步骤可以采用如下公式完成将二维前景点在摄像机坐标系中的投影点继续投影到世界坐标系中:Optionally, this step can use the following formula to continue projecting the projection point of the two-dimensional foreground point in the camera coordinate system to the world coordinate system:
其中,(UW,VW,ZW)是三维前景点的三维坐标,R、T分别表示采集该帧二维影像图的该台摄像机的摄像机坐标系相对于世界坐标系的旋转变换矩阵和平移变换矩阵。where (UW, VW, ZW) are the 3D coordinates of the 3D foreground point, and R and T respectively denote the rotation transformation matrix and the translation transformation matrix of the camera coordinate system of the camera that collected this frame relative to the world coordinate system.
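The transformation formula was an image in the original; a common convention consistent with the description is P_w = R · P_c + T, which is assumed in the sketch below. Note that some libraries store the inverse (world-to-camera) transform instead, so the direction of R and T is an assumption.

```python
import numpy as np

def camera_to_world(p_cam, R, T):
    """Transform a point from the camera coordinate system into the
    world coordinate system (step S214) using a rotation matrix R and
    translation vector T, under the camera-to-world convention."""
    return R @ p_cam + T

# Illustrative extrinsics: camera rotated 90 degrees about the vertical
# axis and mounted 2 m above the world origin
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([0.0, 0.0, 2.0])
print(camera_to_world(np.array([1.0, 0.0, 4.0]), R, T))  # [0. 1. 6.]
```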
为了完成上述过程,需要确定每个摄像机的摄像机坐标系相对于世界坐标系的旋转变换矩阵和平移变换矩阵。In order to complete the above process, it is necessary to determine the rotation transformation matrix and translation transformation matrix of the camera coordinate system of each camera relative to the world coordinate system.
可选地,步骤S214可以采用步骤S2141~步骤S2145的过程确定旋转变换矩阵和平移变换矩阵:Optionally, step S214 can use the process of steps S2141 to S2145 to determine the rotation transformation matrix and translation transformation matrix:
步骤S2141,将二维影像图中用于显示地面的点确定为地面点。Step S2141, determining the points used to display the ground in the two-dimensional image map as ground points.
具体的,本步骤是将二维影像图中表示的影像信息为地面的点确定为地面点。Specifically, this step determines as ground points those points in the 2D image whose image content represents the ground.
步骤S2142,在二维影像图中确定所有地面点的二维坐标,并在该帧二维影像图对应的二维深度图中确定所有地面点的深度值。Step S2142, determine the two-dimensional coordinates of all ground points in the two-dimensional image map, and determine the depth values of all ground points in the two-dimensional depth map corresponding to the two-dimensional image frame.
假设二维影像图中某一地面点的二维坐标为(u’,v’),由于二维影像图及其对应的二维深度图是采用相同的坐标轴,因此,二维深度图中坐标为(u’,v’)的点的像素值(z’)即为该地面点的深度值。Suppose the 2D coordinates of a certain ground point in the 2D image are (u', v'). Since the 2D image and its corresponding 2D depth map use the same coordinate axes, the pixel value z' of the point with coordinates (u', v') in the 2D depth map is the depth value of that ground point.
具体实施时,如果摄像机组合所获取的二维影像图在某些位置纹理不够清楚,则计算深度值时可能是失败的,也就是不能得出这些位置的深度值。本步骤中所确定的地面点如果属于这种情况,则舍弃该地面点。During specific implementation, if the texture of the two-dimensional image acquired by the combination of cameras is not clear enough at certain positions, the calculation of the depth value may fail, that is, the depth value of these positions cannot be obtained. If the ground point determined in this step belongs to this case, the ground point is discarded.
步骤S2143,利用所有地面点的二维坐标及其深度值进行三维平面拟合。Step S2143, using the two-dimensional coordinates of all ground points and their depth values to perform three-dimensional plane fitting.
步骤S2144,将拟合得到的面积最大的平面所对应的函数中的参数确定为采集该帧二维影像图的摄像机组合的部署参数。Step S2144, determining the parameters in the function corresponding to the fitted plane with the largest area as the deployment parameters of the camera combination that collects the two-dimensional image of the frame.
其中,部署参数反映了摄像机的安装位置、高度和角度等情况。Wherein, the deployment parameters reflect conditions such as the installation position, height and angle of the camera.
具体的,本步骤利用所有地面点的二维坐标及其深度值进行三维平面拟合时,受计算误差的影响,可能会得到面积大小不等的多个三维平面,其中,面积最大的三维平面最有可能对应于真实世界中的地面。摄像机所拍摄的真实世界中的地面能够反映出摄像机的安装位置、高度和角度等情况,而三维平面的函数参数则反映了三维平面的属性,因此,该面积最大的三维平面(对应于真实世界中的地面)所对应的函数中的参数(反映了真实世界中的地面的属性)能够反映摄像机的安装位置、高度和角度,从而可以将三维平面的函数参数确定为该摄像机组合的部署参数。Specifically, when three-dimensional plane fitting is performed in this step using the 2D coordinates and depth values of all ground points, calculation errors may produce several 3D planes of different areas, of which the plane with the largest area most likely corresponds to the ground in the real world. The real-world ground captured by a camera reflects the camera's installation position, height and angle, while the function parameters of a 3D plane reflect the plane's properties; therefore, the parameters of the function corresponding to the largest-area 3D plane (which corresponds to the real-world ground and thus reflects its properties) can reflect the camera's installation position, height and angle, and may accordingly be taken as the deployment parameters of the camera combination.
假设拟合得到的最大平面对应的函数为Ax+By+Cz+D=0,其中x、y、z为变量,A、B、C、D为参数,则参数A、B、C、D为该摄像机组合的部署参数。Suppose the function corresponding to the largest fitted plane is Ax+By+Cz+D=0, where x, y and z are variables and A, B, C and D are parameters; then A, B, C and D are the deployment parameters of the camera combination.
具体实施时,本步骤可以采用开源的PCL(Point Cloud Library,参考网站http://pointclouds.org/)所公开的三维平面拟合方法。During specific implementation, this step can adopt the three-dimensional plane fitting method disclosed by the open source PCL (Point Cloud Library, refer to the website http://pointclouds.org/).
需要说明的是,本发明对采用的三维平面拟合方法不作限定,即以上说明仅为本发明的具体实施例而已,并不用于限定本发明的保护范围,凡在本发明的精神和原则之内,选择其它任何的三维平面拟合方法均应包含在本发明的保护范围之内,例如最小二乘法或者基于梯度下降等方法。It should be noted that the present invention does not limit the 3D plane fitting method used; the above description is only a specific embodiment and does not limit the protection scope of the present invention. Any other 3D plane fitting method chosen within the spirit and principles of the present invention, such as the least-squares method or gradient-descent-based methods, shall fall within its protection scope.
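As an illustration of the least-squares route mentioned above, the sketch below fits a plane A·x + B·y + C·z + D = 0 to a point cloud with an SVD. The function `fit_plane` and the sample points are assumptions for demonstration only; on noisy depth data a robust estimator such as PCL's RANSAC plane segmentation would be preferred.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane A*x + B*y + C*z + D = 0 to a set
    of 3D points: the normal (A, B, C) is the right singular vector
    of the centred point cloud with the smallest singular value."""
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                 # direction of least variance
    d = -normal @ centroid          # plane passes through the centroid
    return normal[0], normal[1], normal[2], d

# Points sampled from the plane z = 1 (i.e. 0*x + 0*y + 1*z - 1 = 0)
pts = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1), (2, 3, 1)]
A, B, C, D = fit_plane(pts)
print(round(C / -D, 3))  # the recovered plane satisfies C/-D == 1.0
```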
步骤S2145,以世界坐标系为基准,对所述部署参数进行标定计算,得到采集该帧二维影像图的摄像机组合的旋转变换矩阵和平移变换矩阵。In step S2145, the deployment parameters are calibrated and calculated based on the world coordinate system to obtain a rotation transformation matrix and a translation transformation matrix of the combination of cameras that collect the two-dimensional image of the frame.
步骤S215,将二维前景图像中全部二维前景点对应的三维前景点组成三维前景图像。Step S215, combining the 3D foreground points corresponding to all the 2D foreground points in the 2D foreground image to form a 3D foreground image.
事实上,三维前景图像是一种点云,这种点云对应的是二维前景图像在真实世界中所对应的物体。因此,步骤S215实际上是将点云数据集合形成点云的过程。In fact, the 3D foreground image is a kind of point cloud, which corresponds to the object corresponding to the 2D foreground image in the real world. Therefore, step S215 is actually a process of gathering point cloud data to form a point cloud.
步骤S22,计算真实世界中的点在当前摄像机组合在当前采集周期采集的多个二维前景图像中所对应位置的变化情况。其中,所述真实世界中的点为每个二维前景图像中的每个二维前景点在真实世界中对应的点。Step S22, calculating the change of the corresponding position of the point in the real world in the plurality of two-dimensional foreground images collected by the current camera combination in the current collection period. Wherein, the point in the real world is a point corresponding to each 2D foreground point in each 2D foreground image in the real world.
具体的,真实世界中的点在这多帧二维前景图像中所对应位置的变化情况可以通过对这多帧二维前景图像计算光流来得到。Specifically, changes in the corresponding positions of points in the real world in the multiple frames of two-dimensional foreground images can be obtained by calculating the optical flow for the multiple frames of two-dimensional foreground images.
光流是指空间物体表面上的点的运动速度在视觉传感器的成像平面上的表达。在本发明中,光流是指真实世界中物体表面上的点的运动速度在摄像机采集的二维影像图上的表达,当真实世界中物体表面上的点被限制为是二维前景图像中的二维前景点在真实世界中对应的点时,光流就是指真实世界中物体表面上的点的运动速度在不同采集时刻的二维前景图像上的表达。Optical flow is the expression, on the imaging plane of a visual sensor, of the motion velocity of points on object surfaces in space. In the present invention, optical flow refers to the expression of the motion velocity of real-world object-surface points on the 2D images collected by the cameras; when those real-world points are restricted to the points corresponding to the 2D foreground points of the 2D foreground images, optical flow refers to the expression of their motion velocity on the 2D foreground images at the different acquisition times.
因此,本步骤可以通过对这多个二维前景图像计算光流,得到真实世界中的点(每个二维前景图像中的每个二维前景点在真实世界中对应的点)在这多个二维前景图像中所对应位置的变化情况。Therefore, by computing optical flow over these multiple 2D foreground images, this step can obtain the changes in the positions corresponding to the real-world points (the points corresponding in the real world to each 2D foreground point of each 2D foreground image) across these 2D foreground images.
可选地,本步骤在实现对当前摄像机组合在当前采集周期采集的多个二维前景图像计算光流这一过程时,可以首先对这多个二维前景图像对应的多帧二维影像图计算稠密光流,然后利用这多个二维前景图像对稠密光流进行过滤处理。Optionally, when computing optical flow for the multiple 2D foreground images collected by the current camera combination in the current acquisition cycle, this step may first compute dense optical flow over the multiple frames of 2D images corresponding to those foreground images, and then filter the dense optical flow using the 2D foreground images.
具体的,本步骤可以采用Lucas–Kanade方法对多个二维影像图计算稠密光流,考虑到计算速度,还可以使用GPU(Graphics Processing Unit,图形处理器)来进行加速。其中,利用二维前景图像对稠密光流进行过滤处理,是为了去除每个二维影像图中背景部分(二维前景图像以外的部分)的光流信息,这里采用的过滤处理可以是高斯滤波、均值滤波或中值滤波等处理方式。Specifically, the Lucas–Kanade method may be used in this step to compute dense optical flow over the multiple 2D images, and a GPU (Graphics Processing Unit) may be used for acceleration given the computational cost. Filtering the dense optical flow with the 2D foreground images serves to remove the optical-flow information of the background portion of each 2D image (the portion outside the 2D foreground image); the filtering may be Gaussian filtering, mean filtering, median filtering or the like.
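The core of the Lucas-Kanade computation can be illustrated with a toy, single-window version: one least-squares flow vector for a whole image region, rather than the dense per-pixel field (with foreground masking) used in step S22. The function name and the synthetic test pattern are assumptions for demonstration.

```python
import numpy as np

def lucas_kanade_window(frame1, frame2):
    """Estimate a single (u, v) optical-flow vector for a whole image
    window by solving the classic Lucas-Kanade least-squares system.
    Spatial gradients use central differences on frame1."""
    ix = (np.roll(frame1, -1, axis=1) - np.roll(frame1, 1, axis=1)) / 2.0
    iy = (np.roll(frame1, -1, axis=0) - np.roll(frame1, 1, axis=0)) / 2.0
    it = frame2 - frame1
    # Drop the border, where the wrapped central differences are invalid
    ix, iy, it = (m[2:-2, 2:-2] for m in (ix, iy, it))
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)  # (u, v)

# A smooth pattern translated by exactly one pixel along x between frames
idx = np.arange(64, dtype=np.float64)
xx, yy = np.meshgrid(idx, idx)
f1 = np.sin(0.25 * xx) + np.sin(0.2 * yy)
f2 = np.sin(0.25 * (xx - 1.0)) + np.sin(0.2 * yy)
u, v = lucas_kanade_window(f1, f2)
print(round(float(u), 2), round(float(v), 2))  # u close to 1, v close to 0
```

In the pipeline described above, such a flow field would additionally be zeroed outside the foreground mask before further processing.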
步骤S23,根据真实世界中的点在所述多个二维前景图像中所对应位置的变化情况,以及根据所述二维前景图像与所述三维前景图像之间的对应关系,计算真实世界中的点在所述多个三维前景图像中所对应位置的变化情况。Step S23: according to the changes in the positions corresponding to real-world points in the multiple 2D foreground images, and according to the correspondence between the 2D foreground images and the 3D foreground images, calculate the changes in the positions corresponding to the real-world points in the multiple 3D foreground images.
需要说明的是,对当前摄像机组合在当前采集周期中依次采集的多帧二维影像图进行步骤S21~步骤S23的图像处理过程时,为了确定真实世界中的点在各个采集时刻的二维前景图像和三维前景图像中所对应的位置,较佳的,这多帧二维影像图是由当前摄像机组合中的同一台摄像机采集的,以保证这多帧二维影像图所覆盖的场景尽量为同一场景,从而有利于在各个采集时刻的二维前景图像和三维前景图像中找到真实世界中的同一点。事实上,当每个摄像机组合是由地理位置相近的多台摄像机组合形成时,即便这多帧二维影像图是由当前摄像机组合中的不同摄像机采集的,这多帧二维影像图所覆盖的场景也是存在一定的交集的,也有利于在各个采集时刻的二维前景图像和三维前景图像中找到真实世界中的同一点。It should be noted that, when performing the image processing of steps S21 to S23 on the multiple frames of 2D images sequentially collected by the current camera combination in the current acquisition cycle, in order to determine the positions corresponding to real-world points in the 2D and 3D foreground images at each acquisition time, these frames are preferably collected by the same camera of the current combination, so that they cover as nearly the same scene as possible, which makes it easier to find the same real-world point in the 2D and 3D foreground images at the successive acquisition times. In fact, when each camera combination is formed from several cameras located near each other, the scenes covered by the frames overlap to some extent even if the frames are collected by different cameras of the combination, which likewise helps to find the same real-world point in the 2D and 3D foreground images at each acquisition time.
Step S3: for each acquisition moment of the current acquisition period, fuse the three-dimensional foreground points of the three-dimensional foreground images produced by all camera combinations at the monitored site at that moment, following the rule that all three-dimensional foreground points corresponding to the same real-world point are merged into a single three-dimensional fused foreground point; then combine all of the resulting three-dimensional fused foreground points into the three-dimensional fused foreground map for that acquisition moment.
Because the scene captured by a single camera combination cannot cover the entire monitored site, this step integrates the scenes captured by all camera combinations in order to obtain three-dimensional scene information for the whole site; concretely, the three-dimensional foreground images of all camera combinations at the same acquisition moment are fused together.
In a concrete implementation, step S3 may proceed as steps S31 to S34:
Step S31: take each acquisition moment of the current acquisition period in turn as the current acquisition moment, select each camera combination in turn, and select in turn each three-dimensional foreground point of the three-dimensional foreground image that the currently selected camera combination produced at the current acquisition moment as the current three-dimensional foreground point.
Step S32: determine whether, among the three-dimensional foreground images produced by the other camera combinations at the site at the current acquisition moment, there exists a three-dimensional foreground point that corresponds to the same real-world point as the current three-dimensional foreground point.
Specifically, two three-dimensional foreground points are judged to correspond to the same real-world point when their Euclidean distance in the world coordinate system is below a given threshold; otherwise they are judged to correspond to different real-world points.
Alternatively, the color difference between the two three-dimensional foreground points may be combined with their Euclidean distance in a weighted sum; if the weighted result is below a given threshold, the two points are judged to correspond to the same real-world point, and otherwise they are not.
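A minimal sketch of the two same-point tests described above (pure Euclidean distance, or Euclidean distance weighted together with color difference); the threshold and weighting values are illustrative, since the patent does not fix them:

```python
import math

def same_real_world_point(p, q, thresh=0.05, color_weight=0.3):
    """Step S32 test: do two 3D foreground points correspond to the same
    real-world point?  Each point is ((U, V, Z), (r, g, b)).  Setting
    color_weight=0 reduces this to the pure Euclidean-distance test."""
    (u1, v1, z1), c1 = p
    (u2, v2, z2), c2 = q
    euclid = math.sqrt((u1 - u2) ** 2 + (v1 - v2) ** 2 + (z1 - z2) ** 2)
    # Normalized per-channel color difference in [0, 1].
    color_diff = sum(abs(a - b) for a, b in zip(c1, c2)) / (3 * 255)
    score = (1 - color_weight) * euclid + color_weight * color_diff
    return score < thresh
```

With world units of meters, points a centimeter apart and of identical color fall well below the threshold and would be fused in step S33.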
Step S33: if no such point exists, designate the current three-dimensional foreground point as a three-dimensional fused foreground point; if such points exist, fuse all three-dimensional foreground points corresponding to the same real-world point into a single three-dimensional fused foreground point according to the following formula:
where U, V, and Z are the three-dimensional coordinates of the fused foreground point along the three axes of the world coordinate system; the three-dimensional foreground points corresponding to the same real-world point are designated points to be fused; N is the number of points to be fused for that real-world point; n is the index of a point to be fused; (U_Wn, V_Wn, Z_Wn) are the three-dimensional coordinates of point n; weight_n is the weight of point n; and dist_n is the distance from point n to the center coordinates of its own camera combination, the center coordinates of a camera combination being those of the center of symmetry of the projections, in the world coordinate system, of the installation positions of the combination's cameras.
When a camera combination consists of two cameras, its center coordinates are those of the midpoint between the projections, in the world coordinate system, of the two cameras' installation positions.
For example, let two three-dimensional foreground points corresponding to the same real-world point be points to be fused A and B, where A comes from camera combination (a, b) and B from camera combination (b, c); let the center coordinates of combination (a, b) be (U_1, V_1, Z_1) and those of combination (b, c) be (U_2, V_2, Z_2); let the three-dimensional coordinates of A be (U_W1, V_W1, Z_W1) and those of B be (U_W2, V_W2, Z_W2); and let dist_1 be the distance from A to the center coordinates of combination (a, b) and dist_2 the distance from B to the center coordinates of combination (b, c). Then:
With weight_1 the weight of point A and weight_2 the weight of point B:
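The fusion formulas themselves appear as images in the patent and are not reproduced in this extraction; what survives is that each point to be fused carries a weight weight_n derived from its distance dist_n to the center of its camera combination, and that the fused coordinates combine the (U_Wn, V_Wn, Z_Wn) by weight. A sketch under the assumption of normalized inverse-distance weights (a point seen from a nearer camera combination counts more) is:

```python
def fuse_points(points):
    """Merge points-to-be-fused into one 3D fused foreground point.
    `points` is a list of ((U, V, Z), dist) pairs, dist being the
    distance to the center of the point's camera combination.
    The inverse-distance weighting is an assumption, standing in for
    the patent's unreproduced formula."""
    inv = [1.0 / d for _, d in points]
    total = sum(inv)
    weights = [w / total for w in inv]  # weight_n, normalized to sum to 1
    return tuple(sum(w * p[axis] for w, (p, _) in zip(weights, points))
                 for axis in range(3))
```

Two points at equal distances from their respective combination centers are simply averaged; unequal distances shift the fused point toward the nearer observation.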
Step S34: combine all of the three-dimensional fused foreground points into the three-dimensional fused foreground map for the current acquisition moment.
Step S4: based on the changes in the positions that real-world points occupy across the multiple three-dimensional foreground images, and on the correspondence between each three-dimensional fused foreground point and the three-dimensional foreground points fused into it, compute the changes in the positions that real-world points occupy across the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period.
Step S5: from the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period, and from the changes in the positions that real-world points occupy across those maps, determine the target objects and their spatial distribution features and motion distribution features.
Specifically, step S5 may proceed as steps S51-a to S57-a:
Step S51-a: select in turn the three-dimensional fused foreground map of each acquisition moment of the current acquisition period.
Step S52-a: divide the three-dimensional space containing the currently selected three-dimensional fused foreground map into a number of three-dimensional subspaces, and compute one or more of the following statistics: the number of three-dimensional fused foreground points in each subspace; the most frequent color among the points in each subspace; and the maximum height of the points in each subspace.
Specifically, this step may partition the three-dimensional space containing the currently selected fused foreground map into cuboid bins of a preset size.
In a concrete implementation, to make the statistics convenient to use later, they may be represented as a histogram whose horizontal axis is the three-dimensional coordinate of each subspace and whose vertical axis is the statistic.
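The bin statistics of step S52-a can be sketched as follows; the bin size, and the representation of a fused foreground point as ((U, V, Z), color), are illustrative:

```python
from collections import defaultdict

def subspace_histogram(points, bin_size=0.5):
    """Divide space into cuboid bins and record, per bin, the point
    count, the dominant color and the maximum height (Z)."""
    bins = defaultdict(lambda: {"count": 0, "colors": defaultdict(int),
                                "max_z": float("-inf")})
    for (u, v, z), color in points:
        key = (int(u // bin_size), int(v // bin_size), int(z // bin_size))
        b = bins[key]
        b["count"] += 1
        b["colors"][color] += 1
        b["max_z"] = max(b["max_z"], z)
    # Reduce the color tallies to the most frequent color per bin.
    return {k: {"count": b["count"],
                "color": max(b["colors"], key=b["colors"].get),
                "max_z": b["max_z"]}
            for k, b in bins.items()}
```

Plotting `count` (or `max_z`) against the bin index gives the histogram form described above.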
Step S53-a: according to the statistics, aggregate all three-dimensional fused foreground points contained in the subspaces that satisfy the clustering condition into three-dimensional fused foreground blocks.
Here the clustering condition is that the spatial distance between subspaces is below a first preset threshold and the difference between their statistics is below a second preset threshold; that is, only subspaces meeting both criteria are clustered together.
Concretely, the aggregation of the points contained in the qualifying subspaces may use a clustering method such as connected-component analysis or the mean-shift algorithm.
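A connected-component reading of the clustering condition, grouping bins that are both spatially adjacent and statistically similar; the thresholds are illustrative:

```python
from collections import deque

def cluster_bins(stats, max_gap=1, max_stat_diff=5):
    """Flood-fill clustering over bins (step S53-a).  `stats` maps a
    bin's integer grid index (i, j, k) to a scalar statistic such as
    its point count."""
    keys, seen, clusters = list(stats), set(), []
    for start in keys:
        if start in seen:
            continue
        seen.add(start)
        comp, queue = [], deque([start])
        while queue:
            k = queue.popleft()
            comp.append(k)
            for other in keys:
                if other in seen:
                    continue
                adjacent = all(abs(a - b) <= max_gap for a, b in zip(k, other))
                similar = abs(stats[k] - stats[other]) <= max_stat_diff
                if adjacent and similar:
                    seen.add(other)
                    queue.append(other)
        clusters.append(comp)
    return clusters
```

Mean shift over the bin centers, as also suggested above, would replace the flood fill while keeping the same inputs.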
Step S54-a: perform template matching on the resulting three-dimensional fused foreground blocks in the world coordinate system to decide whether each block is a target object.
Specifically, templates may first be designed for the target objects and then matched against the three-dimensional fused foreground blocks; when a match succeeds, the block is determined to be a target object.
In a concrete implementation, the target objects can be chosen in this step according to the characteristics of the monitored site; at an oil and gas well site, for example, people, vehicles and crowds may serve as target objects.
Step S55-a: when a three-dimensional fused foreground block is determined to be a target object, use the changes in the positions that real-world points occupy across the fused foreground maps of the current acquisition period to determine, across the fused foreground maps of different acquisition moments, how the three-dimensional coordinates change for each fused foreground point that forms the target object and for the other fused foreground points corresponding to the same real-world points, and from this determine the position and displacement of the target object at each acquisition moment of the period.
Step S56-a: take the positions of the target object at the acquisition moments of the current acquisition period as its spatial distribution features.
Step S57-a: take the displacements of the target object at the acquisition moments of the current acquisition period as its motion distribution features.
Specifically, the motion distribution features of a target object include, but are not limited to, its motion amplitude, direction and acceleration.
The procedure above performs template matching on the three-dimensional fused foreground blocks in the world coordinate system. Because this matching takes place in three-dimensional space, it involves a heavy computational load and is time-consuming. To reduce the computation and speed up processing, the three-dimensional fused foreground map may optionally be projected onto a two-dimensional plane to obtain a fused foreground projection map, with template matching then performed in two-dimensional space. Concretely, step S5 may also proceed as steps S51-b to S58-b:
Step S51-b: select in turn the three-dimensional fused foreground map of each acquisition moment of the current acquisition period.
Step S52-b: project each three-dimensional fused foreground point of the currently selected map onto a two-dimensional plane to obtain a fused foreground projection point, and combine the projection points of all the points of the map into a fused foreground projection map.
The projection in this step may be onto a horizontal plane or onto a vertical plane.
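A minimal sketch of the projection choice in step S52-b. Projecting to the horizontal plane keeps (U, V) and carries Z along as the height attribute used by the later max-height statistic; a vertical-plane projection keeps (U, Z) instead (the exact pairing of axes is an assumption):

```python
def project_to_plane(points_3d, plane="horizontal"):
    """Project 3D fused foreground points onto a 2D plane, keeping the
    dropped coordinate as an attribute of each projection point."""
    if plane == "horizontal":
        return [((u, v), z) for (u, v, z) in points_3d]
    return [((u, z), v) for (u, v, z) in points_3d]  # vertical plane
```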
Step S53-b: divide the two-dimensional space containing the fused foreground projection map into a number of two-dimensional subspaces, and compute one or more of the following statistics: the number of fused foreground projection points in each subspace; the most frequent color among the projection points in each subspace; and the maximum height of the projection points in each subspace.
Step S54-b: according to the statistics, aggregate all fused foreground projection points contained in the two-dimensional subspaces that satisfy the clustering condition into two-dimensional fused foreground blocks, where the clustering condition is that the spatial distance is below a first preset threshold and the difference between statistics is below a second preset threshold.
Step S55-b: perform template matching on the resulting two-dimensional fused foreground blocks in the two-dimensional space to decide whether each block is a target object.
Step S56-b: when a two-dimensional fused foreground block is determined to be a target object, use the changes in the positions that real-world points occupy across the three-dimensional fused foreground maps of the current acquisition period to determine, across the projection maps corresponding to the maps of different acquisition moments, how the two-dimensional coordinates change for each projection point that forms the target object and for the other projection points corresponding to the same real-world points, and from this determine the position and displacement of the target object at each acquisition moment of the period.
Step S57-b: take the positions of the target object at the acquisition moments of the current acquisition period as its spatial distribution features.
Step S58-b: take the displacements of the target object at the acquisition moments of the current acquisition period as its motion distribution features.
Step S6: from the spatial distribution features and motion distribution features of the target objects, decide whether a target scenario has occurred.
In a concrete implementation, the target scenarios can be chosen according to the characteristics of the monitored site; at an oil and gas well site, for example, intrusion, removal of an object, abandonment of an object, running or fighting may serve as target scenarios.
Specifically, this step may first perform pattern classification on the spatial distribution features and motion distribution features of the target objects, and then decide from the classification result whether a target scenario has occurred.
The pattern classification may be performed with a multi-class classifier such as a random forest, which maps the features directly to multiple categories, or with several single-class classifiers such as SVM (Support Vector Machine) classifiers. For instance, one SVM classifier may be trained to detect fighting by taking fighting as the positive class and all other scenarios as the negative class, and so on for the other scenarios.
For example, in a random-forest scheme for deciding among multiple anomaly types, the spatial distribution features and motion distribution features of the target objects are taken as input, several tree models are learned from samples, and prediction with weighted voting over the trees yields the anomaly type corresponding to the currently observed spatial or motion distribution features.
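Only the weighted-voting step of that random-forest scheme is sketched below; the per-tree predictions and their weights are assumed to come from already-trained tree models:

```python
def weighted_forest_vote(tree_predictions, tree_weights):
    """Combine per-tree anomaly-type predictions by weighted voting and
    return the label with the highest total weight."""
    scores = {}
    for label, weight in zip(tree_predictions, tree_weights):
        scores[label] = scores.get(label, 0.0) + weight
    return max(scores, key=scores.get)
```

In practice a library implementation such as scikit-learn's random forest would replace both the training and the voting; this sketch only illustrates the aggregation described above.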
Step S7: output the decision result, and raise an alarm when a target scenario has occurred.
Specifically, so that monitoring personnel can learn the state of the site promptly, the present invention may output the decision result directly on a display or by sound, or upload it over a network to a network platform for browsing by computers or mobile terminals (such as mobile phones or portable notebook computers) connected to that platform.
It should be noted that although the drawings describe the operations of the multi-camera cooperative monitoring method of the present invention in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the operations shown must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, several steps may be combined into one, and/or one step may be split into several.
Exemplary device
Having introduced the multi-camera cooperative monitoring method provided by the present invention, we now introduce the multi-camera cooperative monitoring device provided by the present invention with reference to Figures 5 and 6.
Figure 5 is a schematic diagram of the inputs and outputs of the multi-camera cooperative monitoring device. The input is the two-dimensional images captured by all cameras at the monitored site; the output is the detected target scenarios (anomalies such as intrusion, fighting, running or theft) and the alarm information.
In a concrete implementation of the present invention, the device may monitor and control all cameras at the site through an RTU (Remote Terminal Unit), with all cameras capturing images on the basis of the NTP protocol.
Figure 6 is a structural block diagram of the multi-camera cooperative monitoring device, which comprises:
a camera division module 601, configured to group the multiple cameras deployed at the monitored site into multiple camera combinations, each combination consisting of at least two cameras;
an image acquisition module 602, configured to obtain the two-dimensional images captured by each camera combination in each acquisition period;
an image processing module 603, configured to process the multiple frames of two-dimensional images captured in sequence by the current camera combination during the current acquisition period.
The image processing module 603 further comprises:
a first image processing module 604, configured to extract the foreground from the multiple frames of two-dimensional images to obtain multiple two-dimensional foreground images;
a second image processing module 605, configured to project the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images, where the acquisition moments of the current acquisition period, the multiple frames of two-dimensional images, the multiple two-dimensional foreground images and the multiple three-dimensional foreground images are in one-to-one correspondence;
a third image processing module 606, configured to compute the changes in the positions that real-world points occupy across the multiple two-dimensional foreground images, where a real-world point is the point in the real world corresponding to each two-dimensional foreground point, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
a fourth image processing module 607, configured to compute the changes in the positions that real-world points occupy across the multiple three-dimensional foreground images, based on the changes in the positions that those points occupy across the multiple two-dimensional foreground images and on the correspondence between the two-dimensional and three-dimensional foreground images;
a fusion processing module 608, configured, for each acquisition moment of the current acquisition period, to fuse the three-dimensional foreground points of the three-dimensional foreground images produced by all camera combinations at the site at that moment, following the rule that all three-dimensional foreground points corresponding to the same real-world point are merged into a single three-dimensional fused foreground point, and to combine all of the resulting fused foreground points into the three-dimensional fused foreground map for that moment, where a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
a displacement calculation module 609, configured to compute the changes in the positions that real-world points occupy across the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period, based on the changes in the positions that those points occupy across the multiple three-dimensional foreground images and on the correspondence between each fused foreground point and the foreground points fused into it;
a target search module 610, configured to determine the target objects and their spatial distribution features and motion distribution features from the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period and from the changes in the positions that real-world points occupy across those maps;
a decision module 611, configured to decide, from the spatial distribution features and motion distribution features of the target objects, whether a target scenario has occurred;
an output module 612, configured to output the decision result and raise an alarm when a target scenario has occurred.
The second image processing module 605 further comprises a two-dimensional coordinate calculation submodule, a depth value calculation submodule, a first projection submodule, a second projection submodule and a three-dimensional foreground point cloud processing submodule:
the two-dimensional coordinate calculation submodule, configured to determine the two-dimensional coordinates of each two-dimensional foreground point within the two-dimensional image corresponding to the two-dimensional foreground image;
the depth value calculation submodule, configured to determine the depth value of each two-dimensional foreground point within the two-dimensional depth map corresponding to the two-dimensional foreground image, where the depth map is formed from the depth values of the pixels of the corresponding two-dimensional image, computed from the two-dimensional images captured simultaneously by the cameras of the current camera combination;
the first projection submodule, configured to project each two-dimensional foreground point into the camera coordinate system using its two-dimensional coordinates and depth value;
the second projection submodule, configured to project the point from the camera coordinate system onward into the world coordinate system and designate the projection obtained in the world coordinate system as the three-dimensional foreground point;
the three-dimensional foreground point cloud processing submodule, configured to compose the three-dimensional foreground image from the three-dimensional foreground points corresponding to all two-dimensional foreground points of the two-dimensional foreground image.
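The two projection submodules can be sketched with a standard pinhole camera model; the patent does not fix a camera model, so the intrinsics (fx, fy, cx, cy) and the camera pose (R, t) below are assumptions:

```python
def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, t):
    """Back-project a 2D foreground point with its depth value into the
    camera coordinate system (first projection submodule), then map it
    into the world coordinate system with X_w = R @ X_c + t (second
    projection submodule).  R is a row-major 3x3 rotation, t a 3-vector."""
    # Camera coordinates from the pinhole model.
    cam = ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)
    # World coordinates.
    return tuple(sum(R[i][j] * cam[j] for j in range(3)) + t[i]
                 for i in range(3))
```

Applying this to every two-dimensional foreground point and collecting the results yields the three-dimensional foreground image assembled by the point cloud processing submodule.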
The third image processing module 606 obtains the changes in the positions that real-world points occupy across the multiple two-dimensional foreground images by computing optical flow over those images.
融合处理模块608,进一步包括:第一轮询子模块、查找子模块、第一融合子模块、第二融合子模块和三维融合前景点云处理子模块。The fusion processing module 608 further includes: a first polling submodule, a search submodule, a first fusion submodule, a second fusion submodule, and a 3D fusion foreground point cloud processing submodule.
第一轮询子模块,用于依次将当前采集周期的各个采集时刻作为当前采集时刻,依次选取每个摄像机组合,依次选取在当前采集时刻当前选取的摄像机组合对应的三维前景图像中的每个三维前景点作为当前三维前景点;The first polling sub-module is used to sequentially take each collection moment of the current collection cycle as the current collection moment, select each camera combination in turn, and select in turn each of the three-dimensional foreground images corresponding to the camera combination currently selected at the current collection moment. The 3D foreground point is used as the current 3D foreground point;
查找子模块,用于判断在当前采集时刻所述监控现场其他摄像机组合对应的三维前景图像中,是否存在与当前三维前景点对应于真实世界中的同一个点的三维前景点;当判断为不存在时,触发第一融合子模块;当判断为存在时,触发第二融合子模块;The search sub-module is used to judge whether there is a 3D foreground point corresponding to the same point in the real world as the current 3D foreground point in the 3D foreground image corresponding to other camera combinations at the monitoring site at the current collection moment; When it exists, trigger the first fusion submodule; when it is judged to exist, trigger the second fusion submodule;
第一融合子模块,用于将当前三维前景点确定为三维融合前景点;The first fusion submodule is used to determine the current 3D foreground point as a 3D fusion foreground point;
第二融合子模块,用于按照如下公式将对应于真实世界中的同一个点的全部三维前景点融合成为一个三维融合前景点:The second fusion sub-module is used to fuse all 3D foreground points corresponding to the same point in the real world into a 3D fusion foreground point according to the following formula:
其中,U、V、Z是三维融合前景点的三维坐标,分别对应于世界坐标系的三个坐标轴;将对应于真实世界中的同一个点的各个三维前景点确定为待融合点,N是对应于真实世界中的同一个点的全部待融合点的个数;n是待融合点的序号;(UWn,VWn,ZWn)是序号为n的待融合点的三维坐标;weightn是序号为n的待融合点的权重;distn是序号为n的待融合点到自身对应的摄像机组合的中心坐标的距离;其中,摄像机组合的中心坐标为摄像机组合的各个摄像机的装设位置在世界坐标系中的各个投影点的中心对称点的坐标;Among them, U, V, and Z are the three-dimensional coordinates of the three-dimensional fusion foreground point, corresponding to the three coordinate axes of the world coordinate system respectively; each three-dimensional foreground point corresponding to the same point in the real world is determined as the point to be fused, N is the number of all points to be fused corresponding to the same point in the real world; n is the serial number of the point to be fused; (U Wn , V Wn , Z Wn ) is the three-dimensional coordinates of the point to be fused with the serial number n; weight n is the weight of the point to be fused with the sequence number n; dist n is the distance from the point to be fused with the sequence number n to the center coordinate of the camera combination corresponding to itself; wherein, the center coordinate of the camera combination is the installation of each camera of the camera combination The coordinates of the central symmetric point of each projected point in the world coordinate system;
The 3D fused foreground point cloud processing sub-module is configured to assemble all the 3D fused foreground points into the 3D fused foreground map corresponding to the current acquisition moment.
In one embodiment, the target search module 610 further includes: a second polling sub-module, a 3D subspace division sub-module, a 3D clustering sub-module, a 3D template matching sub-module, a 3D position and displacement calculation sub-module, a 3D spatial feature calculation sub-module, and a 3D motion feature calculation sub-module.
The second polling sub-module is configured to select, in turn, the 3D fused foreground map at each acquisition moment of the current acquisition cycle;
The 3D subspace division sub-module is configured to divide the 3D space containing the currently selected 3D fused foreground map into a number of 3D subspaces and to compile one or more of the following statistics: the number of 3D fused foreground points contained in each 3D subspace; the most frequent color among the 3D fused foreground points contained in each 3D subspace; the maximum height of the 3D fused foreground points contained in each 3D subspace;
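As an illustration of the subspace statistics, the sketch below divides space into a hypothetical uniform voxel grid (the patent does not fix the shape or size of the subspaces) and compiles the three statistics per occupied voxel:

```python
from collections import Counter

def voxel_statistics(points, cell_size=0.5):
    """points: list of dicts {'xyz': (x, y, z), 'color': color_label}.
    Returns, per occupied voxel: point count, most frequent color, max height.
    """
    stats = {}
    for p in points:
        # Voxel key: integer grid cell containing the point.
        key = tuple(int(c // cell_size) for c in p['xyz'])
        s = stats.setdefault(key, {'count': 0, 'colors': Counter(),
                                   'max_height': float('-inf')})
        s['count'] += 1
        s['colors'][p['color']] += 1
        s['max_height'] = max(s['max_height'], p['xyz'][2])  # Z is height
    return {
        key: {'count': s['count'],
              'color': s['colors'].most_common(1)[0][0],
              'max_height': s['max_height']}
        for key, s in stats.items()
    }
```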
The 3D clustering sub-module is configured to aggregate, based on the statistics, all the 3D fused foreground points contained in 3D subspaces that satisfy the clustering condition into a 3D fused foreground block, where the clustering condition is that the spatial distance is less than a first preset threshold and the difference between the statistics is less than a second preset threshold;
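The clustering condition can be sketched as a flood fill over occupied subspaces: two subspaces are linked when their keys are closer than the first threshold and their statistics differ by less than the second. A minimal sketch using the point count as the compared statistic (the patent allows any of the compiled statistics), with hypothetical threshold values:

```python
import math

def cluster_subspaces(counts, dist_threshold=1.5, stat_threshold=10):
    """counts: dict mapping voxel key (i, j, k) -> point count.
    Returns a list of clusters, each a list of voxel keys."""
    keys = list(counts)
    visited, clusters = set(), []
    for start in keys:
        if start in visited:
            continue
        visited.add(start)
        block, frontier = [], [start]
        while frontier:
            cur = frontier.pop()
            block.append(cur)
            for other in keys:
                if other in visited:
                    continue
                # Link when spatially close AND statistically similar.
                if (math.dist(cur, other) < dist_threshold
                        and abs(counts[cur] - counts[other]) < stat_threshold):
                    visited.add(other)
                    frontier.append(other)
        clusters.append(block)
    return clusters
```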
The 3D template matching sub-module is configured to perform template matching on the resulting 3D fused foreground block in the world coordinate system to determine whether the 3D fused foreground block is a target object;
The 3D position and displacement calculation sub-module is configured to, when the 3D fused foreground block is determined to be a target object, use the changes in the positions that real-world points occupy across the 3D fused foreground maps at the various acquisition moments of the current acquisition cycle to determine how the 3D coordinates of each 3D fused foreground point making up the target object, together with the other 3D fused foreground points corresponding to the same real-world points, change across the 3D fused foreground images at different acquisition moments within the cycle, and to determine from these changes the position and displacement of the target object at each acquisition moment of the current acquisition cycle;
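Position and displacement per acquisition moment can be sketched by tracking the centroid of the points making up the object. This is a simplification: the patent tracks individual point correspondences across moments, while centroids only summarize them.

```python
def positions_and_displacements(frames):
    """frames: one list of (x, y, z) object points per acquisition moment.
    Returns the centroid position at each moment and the displacement
    vector between consecutive moments."""
    positions = [
        tuple(sum(axis) / len(pts) for axis in zip(*pts))
        for pts in frames
    ]
    displacements = [
        tuple(b - a for a, b in zip(prev, cur))
        for prev, cur in zip(positions, positions[1:])
    ]
    return positions, displacements
```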
The 3D spatial feature calculation sub-module is configured to designate the positions of the target object at the various acquisition moments of the current acquisition cycle as the spatial distribution features of the target object;
The 3D motion feature calculation sub-module is configured to designate the displacements of the target object at the various acquisition moments of the current acquisition cycle as the motion distribution features of the target object.
In another embodiment, the target search module 610 further includes: a third polling sub-module, a 2D projection sub-module, a 2D subspace division sub-module, a 2D clustering sub-module, a 2D template matching sub-module, a 2D position and displacement calculation sub-module, a 2D spatial feature calculation sub-module, and a 2D motion feature calculation sub-module.
The third polling sub-module is configured to select, in turn, the 3D fused foreground map at each acquisition moment of the current acquisition cycle;
The 2D projection sub-module is configured to project each 3D fused foreground point in the currently selected 3D fused foreground map onto a 2D plane to obtain fused foreground projection points, and to combine the fused foreground projection points corresponding to all the 3D fused foreground points in the currently selected 3D fused foreground map into a fused foreground projection map;
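A common choice for the projection plane is the ground plane of the world coordinate system; the sketch below drops the height axis and keeps the height as an attribute of each projection point. This is an assumption for illustration only: the patent requires some 2D plane but does not fix which one.

```python
def project_to_plane(fused_points):
    """fused_points: list of (U, V, Z) 3D fused foreground points.
    Projects onto the U-V ground plane; Z is retained as a height attribute
    so that the 2D subspace statistics can still use it."""
    return [{'uv': (u, v), 'height': z} for (u, v, z) in fused_points]
```

Working in the projected 2D space reduces the cost of the subsequent clustering and template matching relative to the 3D variant, at the price of losing vertical structure except for the retained height attribute.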
The 2D subspace division sub-module is configured to divide the 2D space containing the fused foreground projection map into a number of 2D subspaces and to compile one or more of the following statistics: the number of fused foreground projection points contained in each 2D subspace; the most frequent color among the fused foreground projection points contained in each 2D subspace; the maximum height of the fused foreground projection points contained in each 2D subspace;
The 2D clustering sub-module is configured to aggregate, based on the statistics, all the fused foreground projection points contained in 2D subspaces that satisfy the clustering condition into a 2D fused foreground block, where the clustering condition is that the spatial distance is less than a first preset threshold and the difference between the statistics is less than a second preset threshold;
The 2D template matching sub-module is configured to perform template matching on the resulting 2D fused foreground block in the 2D space to determine whether the 2D fused foreground block is a target object;
The 2D position and displacement calculation sub-module is configured to, when the 2D fused foreground block is determined to be a target object, use the changes in the positions that real-world points occupy across the 3D fused foreground maps at the various acquisition moments of the current acquisition cycle to determine how the 2D coordinates of each fused foreground projection point making up the target object, together with the other fused foreground projection points corresponding to the same real-world points, change across the fused foreground projection maps derived from the 3D fused foreground images at different acquisition moments within the cycle, and to determine from these changes the position and displacement of the target object at each acquisition moment of the current acquisition cycle;
The 2D spatial feature calculation sub-module is configured to designate the positions of the target object at the various acquisition moments of the current acquisition cycle as the spatial distribution features of the target object;
The 2D motion feature calculation sub-module is configured to designate the displacements of the target object at the various acquisition moments of the current acquisition cycle as the motion distribution features of the target object.
The judgment module 611 further includes a pattern classification processing sub-module and a pattern classification judgment sub-module.
The pattern classification processing sub-module is configured to perform pattern classification on the spatial distribution features and motion distribution features of the target object;
The pattern classification judgment sub-module is configured to judge, from the result of the pattern classification, whether a target scenario has occurred.
The camera division module 601 groups any two cameras whose installation positions at the monitoring site are less than a preset distance apart into a camera combination.
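Pairing cameras by installation distance can be sketched as follows; the camera names and positions in the test are hypothetical, and the distance threshold is a free parameter of the method.

```python
import itertools
import math

def make_camera_combinations(cameras, max_distance):
    """cameras: dict mapping camera name -> (x, y, z) installation position.
    Any two cameras installed less than max_distance apart form a combination."""
    return [
        (name_a, name_b)
        for (name_a, pos_a), (name_b, pos_b)
        in itertools.combinations(cameras.items(), 2)
        if math.dist(pos_a, pos_b) < max_distance
    ]
```

The rationale for the distance constraint is that two nearby cameras can act as a stereo pair, from which depth, and hence the 3D foreground points, can be recovered.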
When the image processing module 603 performs image processing on the multiple frames of 2D images captured in sequence by the current camera combination during the current acquisition cycle, those frames are captured by the same camera within the current camera combination.
In the multi-camera cooperative monitoring apparatus provided by the present invention, the 2D images captured by the cameras may be grayscale images or color images.
The multi-camera cooperative monitoring apparatus and the multi-camera cooperative monitoring method provided by the present invention are based on the same inventive concept; for specific implementations, refer to the foregoing description of the multi-camera cooperative monitoring method, which is not repeated here.
It should be noted that although several units or sub-units of the multi-camera cooperative monitoring apparatus are mentioned in the detailed description above, this division is not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more of the units described above may be embodied in a single unit; conversely, the features and functions of a single unit described above may be further divided and embodied by multiple units.
Although the spirit and principles of the present invention have been described with reference to several specific embodiments, it should be understood that the present invention is not limited to the specific embodiments disclosed, and that the division into aspects does not imply that features in those aspects cannot be combined to advantage; that division is made merely for convenience of presentation. The present invention is intended to cover the various modifications and equivalent arrangements falling within the spirit and scope of the appended claims.
The specific embodiments described above explain the objectives, technical solutions, and beneficial effects of the present invention in further detail. It should be understood that the foregoing are merely specific embodiments of the present invention and are not intended to limit its scope of protection; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Those skilled in the art will also appreciate that the various illustrative logical blocks, units, and steps listed in the embodiments of the present invention may be implemented by electronic hardware, computer software, or a combination of both. To clearly demonstrate this interchangeability of hardware and software, the various illustrative components, units, and steps above have been described generally in terms of their function. Whether such functions are implemented in hardware or in software depends on the specific application and the design requirements of the overall system. Those skilled in the art may implement the described functions in various ways for each specific application, but such implementations should not be construed as exceeding the scope of protection of the embodiments of the present invention.
The various illustrative logical blocks, units, or devices described in the embodiments of the present invention may be implemented or operated by a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of the above designed to perform the described functions. The general-purpose processor may be a microprocessor; alternatively, it may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example a digital signal processor and a microprocessor, multiple microprocessors, one or more microprocessors together with a digital signal processor core, or any other similar configuration.
The steps of the methods or algorithms described in the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, the storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integrated into the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. Alternatively, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium. Computer-readable media include computer storage media and communication media that facilitate the transfer of a computer program from one place to another. Storage media may be any available media that a general-purpose or special-purpose computer can access. For example, such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer, or by a general-purpose or special-purpose processor. In addition, any connection may properly be termed a computer-readable medium: for example, software transmitted from a website, server, or other remote source over a coaxial cable, fiber-optic cable, twisted pair, or digital subscriber line (DSL), or wirelessly by means such as infrared, radio, and microwave, is also included in the definition of computer-readable media.
Disks and discs include compact discs, laser discs, optical discs, DVDs, floppy discs, and Blu-ray discs. Disks usually reproduce data magnetically, while discs usually reproduce data optically with lasers. Combinations of the above can also be contained on a computer readable medium.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610280010.4A CN105979203B (en) | 2016-04-29 | 2016-04-29 | A kind of multi-camera cooperative monitoring method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610280010.4A CN105979203B (en) | 2016-04-29 | 2016-04-29 | A kind of multi-camera cooperative monitoring method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN105979203A true CN105979203A (en) | 2016-09-28 |
| CN105979203B CN105979203B (en) | 2019-04-23 |
Family
ID=56993443
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610280010.4A Expired - Fee Related CN105979203B (en) | 2016-04-29 | 2016-04-29 | A kind of multi-camera cooperative monitoring method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105979203B (en) |
Cited By (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106683173A (en) * | 2016-12-22 | 2017-05-17 | 西安电子科技大学 | A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching |
| CN106846284A (en) * | 2016-12-28 | 2017-06-13 | 武汉理工大学 | Active-mode intelligent sensing device and method based on cell |
| CN107197200A (en) * | 2017-05-22 | 2017-09-22 | 北斗羲和城市空间科技(北京)有限公司 | It is a kind of to realize the method and device that monitor video is shown |
| CN107544391A (en) * | 2017-09-25 | 2018-01-05 | 南京律智诚专利技术开发有限公司 | Oil well surveillance equipment timesharing management system |
| CN107613288A (en) * | 2017-09-25 | 2018-01-19 | 北京世纪东方通讯设备有限公司 | A kind of group technology and system for diagnosing multiple paths of video images quality |
| CN108229411A (en) * | 2018-01-15 | 2018-06-29 | 上海交通大学 | Human body hand-held knife behavioral value system and method based on RGB color image |
| CN108898628A (en) * | 2018-06-21 | 2018-11-27 | 北京纵目安驰智能科技有限公司 | Three-dimensional vehicle object's pose estimation method, system, terminal and storage medium based on monocular |
| CN109035658A (en) * | 2018-08-21 | 2018-12-18 | 北京深瞐科技有限公司 | A kind of historical relic safety protecting method and device |
| CN109247915A (en) * | 2018-08-30 | 2019-01-22 | 北京连心医疗科技有限公司 | A kind of the detection labeling and real-time detection method of skin surface deformation |
| CN109559342A (en) * | 2018-03-05 | 2019-04-02 | 北京佳格天地科技有限公司 | The long measurement method of animal body and device |
| CN109785429A (en) * | 2019-01-25 | 2019-05-21 | 北京极智无限科技有限公司 | A kind of method and apparatus of three-dimensional reconstruction |
| CN110544273A (en) * | 2018-05-29 | 2019-12-06 | 杭州海康机器人技术有限公司 | motion capture method, device and system |
| CN110636248A (en) * | 2018-06-22 | 2019-12-31 | 华为技术有限公司 | Target Tracking Method and Device |
| WO2020088739A1 (en) | 2018-10-29 | 2020-05-07 | Hexagon Technology Center Gmbh | Facility surveillance systems and methods |
| CN111127436A (en) * | 2019-12-25 | 2020-05-08 | 北京深测科技有限公司 | Displacement detection early warning method for bridge |
| CN112386248A (en) * | 2019-08-13 | 2021-02-23 | 中国移动通信有限公司研究院 | Method, device and equipment for detecting human body falling and computer readable storage medium |
| CN112669553A (en) * | 2019-10-15 | 2021-04-16 | 四川省数字商企智能科技有限公司 | Unattended system and method for oil and gas station |
| CN112669554A (en) * | 2019-10-15 | 2021-04-16 | 四川省数字商企智能科技有限公司 | Illegal intrusion detection and driving system and method for oil and gas station |
| CN112735011A (en) * | 2019-10-15 | 2021-04-30 | 四川省数字商企智能科技有限公司 | Identification system and method for oil and gas station |
| CN113538584A (en) * | 2021-09-16 | 2021-10-22 | 北京创米智汇物联科技有限公司 | Camera auto-negotiation monitoring processing method and system and camera |
| CN113891048A (en) * | 2021-10-28 | 2022-01-04 | 江苏濠汉信息技术有限公司 | Over-sight distance image transmission system for rail locomotive |
| CN114387346A (en) * | 2022-03-25 | 2022-04-22 | 阿里巴巴达摩院(杭州)科技有限公司 | Image recognition and prediction model processing method, three-dimensional modeling method and device |
| CN114596239A (en) * | 2020-11-19 | 2022-06-07 | 顺丰科技有限公司 | Loading and unloading event detection method, device, computer equipment and storage medium |
| CN115424105A (en) * | 2022-08-31 | 2022-12-02 | 重庆长安汽车股份有限公司 | Method, device, system, computer equipment and medium for peripheral vision moving object fusion |
| CN115690914A (en) * | 2022-11-08 | 2023-02-03 | 中国移动通信集团四川有限公司 | Abnormal behavior reminder method, device, electronic equipment and storage medium |
| CN116612594A (en) * | 2023-05-11 | 2023-08-18 | 深圳市云之音科技有限公司 | Intelligent monitoring and outbound system and method based on big data |
| CN116866522A (en) * | 2023-07-11 | 2023-10-10 | 广州市图威信息技术服务有限公司 | Remote monitoring method |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101344965A (en) * | 2008-09-04 | 2009-01-14 | 上海交通大学 | Tracking system based on binocular camera |
| CN101794481A (en) * | 2009-02-04 | 2010-08-04 | 深圳市先进智能技术研究所 | ATM (Automatic teller machine) self-service bank monitoring system and method |
| CN103716579A (en) * | 2012-09-28 | 2014-04-09 | 中国科学院深圳先进技术研究院 | Video monitoring method and system |
| US20160005228A1 (en) * | 2013-05-01 | 2016-01-07 | Legend3D, Inc. | Method of converting 2d video to 3d video using 3d object models |
| CN105374019A (en) * | 2015-09-30 | 2016-03-02 | 华为技术有限公司 | A multi-depth image fusion method and device |
- 2016-04-29 CN CN201610280010.4A patent/CN105979203B/en not_active Expired - Fee Related
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101344965A (en) * | 2008-09-04 | 2009-01-14 | 上海交通大学 | Tracking system based on binocular camera |
| CN101794481A (en) * | 2009-02-04 | 2010-08-04 | 深圳市先进智能技术研究所 | ATM (Automatic teller machine) self-service bank monitoring system and method |
| CN103716579A (en) * | 2012-09-28 | 2014-04-09 | 中国科学院深圳先进技术研究院 | Video monitoring method and system |
| US20160005228A1 (en) * | 2013-05-01 | 2016-01-07 | Legend3D, Inc. | Method of converting 2d video to 3d video using 3d object models |
| CN105374019A (en) * | 2015-09-30 | 2016-03-02 | 华为技术有限公司 | A multi-depth image fusion method and device |
Cited By (43)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106683173A (en) * | 2016-12-22 | 2017-05-17 | 西安电子科技大学 | A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching |
| CN106683173B (en) * | 2016-12-22 | 2019-09-13 | 西安电子科技大学 | A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching |
| CN106846284A (en) * | 2016-12-28 | 2017-06-13 | 武汉理工大学 | Active-mode intelligent sensing device and method based on cell |
| CN107197200A (en) * | 2017-05-22 | 2017-09-22 | 北斗羲和城市空间科技(北京)有限公司 | It is a kind of to realize the method and device that monitor video is shown |
| CN107544391A (en) * | 2017-09-25 | 2018-01-05 | 南京律智诚专利技术开发有限公司 | Oil well surveillance equipment timesharing management system |
| CN107613288A (en) * | 2017-09-25 | 2018-01-19 | 北京世纪东方通讯设备有限公司 | A kind of group technology and system for diagnosing multiple paths of video images quality |
| CN107613288B (en) * | 2017-09-25 | 2019-06-25 | 北京世纪东方通讯设备有限公司 | A kind of group technology and system diagnosing multiple paths of video images quality |
| CN108229411A (en) * | 2018-01-15 | 2018-06-29 | 上海交通大学 | Human body hand-held knife behavioral value system and method based on RGB color image |
| CN109559342A (en) * | 2018-03-05 | 2019-04-02 | 北京佳格天地科技有限公司 | The long measurement method of animal body and device |
| CN109559342B (en) * | 2018-03-05 | 2024-02-09 | 北京佳格天地科技有限公司 | Method and device for measuring animal body length |
| CN110544273A (en) * | 2018-05-29 | 2019-12-06 | 杭州海康机器人技术有限公司 | motion capture method, device and system |
| CN108898628A (en) * | 2018-06-21 | 2018-11-27 | 北京纵目安驰智能科技有限公司 | Three-dimensional vehicle object's pose estimation method, system, terminal and storage medium based on monocular |
| CN110636248A (en) * | 2018-06-22 | 2019-12-31 | 华为技术有限公司 | Target Tracking Method and Device |
| CN110636248B (en) * | 2018-06-22 | 2021-08-27 | 华为技术有限公司 | Target tracking method and device |
| CN109035658A (en) * | 2018-08-21 | 2018-12-18 | 北京深瞐科技有限公司 | A kind of historical relic safety protecting method and device |
| CN109247915A (en) * | 2018-08-30 | 2019-01-22 | 北京连心医疗科技有限公司 | A kind of the detection labeling and real-time detection method of skin surface deformation |
| CN109247915B (en) * | 2018-08-30 | 2022-02-18 | 北京连心医疗科技有限公司 | Detection label for skin surface deformation and real-time detection method |
| EP3989194A1 (en) | 2018-10-29 | 2022-04-27 | Hexagon Technology Center GmbH | Facility surveillance systems and methods |
| US12254752B2 (en) | 2018-10-29 | 2025-03-18 | Hexagon Technology Center Gmbh | Facility surveillance systems and methods |
| US12175844B2 (en) | 2018-10-29 | 2024-12-24 | Hexagon Technology Center Gmbh | Facility surveillance systems and methods |
| WO2020088739A1 (en) | 2018-10-29 | 2020-05-07 | Hexagon Technology Center Gmbh | Facility surveillance systems and methods |
| US12380783B2 (en) | 2018-10-29 | 2025-08-05 | Hexagon Technology Center Gmbh | Facility surveillance systems and methods |
| EP3996058A1 (en) | 2018-10-29 | 2022-05-11 | Hexagon Technology Center GmbH | Facility surveillance systems and methods |
| CN109785429A (en) * | 2019-01-25 | 2019-05-21 | 北京极智无限科技有限公司 | A kind of method and apparatus of three-dimensional reconstruction |
| WO2020151078A1 (en) * | 2019-01-25 | 2020-07-30 | 北京极智无限科技有限公司 | Three-dimensional reconstruction method and apparatus |
| US11954832B2 (en) | 2019-01-25 | 2024-04-09 | Beijing Ainfinit Technology Co., Ltd | Three-dimensional reconstruction method and apparatus |
| CN112386248A (en) * | 2019-08-13 | 2021-02-23 | 中国移动通信有限公司研究院 | Method, device and equipment for detecting human body falling and computer readable storage medium |
| CN112386248B (en) * | 2019-08-13 | 2024-01-23 | 中国移动通信有限公司研究院 | Human body falling detection method, device, equipment and computer readable storage medium |
| CN112669554A (en) * | 2019-10-15 | 2021-04-16 | 四川省数字商企智能科技有限公司 | Illegal intrusion detection and driving system and method for oil and gas station |
| CN112669553A (en) * | 2019-10-15 | 2021-04-16 | 四川省数字商企智能科技有限公司 | Unattended system and method for oil and gas station |
| CN112735011A (en) * | 2019-10-15 | 2021-04-30 | 四川省数字商企智能科技有限公司 | Identification system and method for oil and gas station |
| CN111127436B (en) * | 2019-12-25 | 2023-10-20 | 北京深测科技有限公司 | Displacement detection early warning method for bridge |
| CN111127436A (en) * | 2019-12-25 | 2020-05-08 | 北京深测科技有限公司 | Displacement detection early warning method for bridge |
| CN114596239A (en) * | 2020-11-19 | 2022-06-07 | 顺丰科技有限公司 | Loading and unloading event detection method, device, computer equipment and storage medium |
| CN113538584A (en) * | 2021-09-16 | 2021-10-22 | 北京创米智汇物联科技有限公司 | Camera auto-negotiation monitoring processing method and system and camera |
| CN113538584B (en) * | 2021-09-16 | 2021-11-26 | 北京创米智汇物联科技有限公司 | Camera auto-negotiation monitoring processing method and system and camera |
| CN113891048A (en) * | 2021-10-28 | 2022-01-04 | 江苏濠汉信息技术有限公司 | Over-sight distance image transmission system for rail locomotive |
| CN114387346A (en) * | 2022-03-25 | 2022-04-22 | 阿里巴巴达摩院(杭州)科技有限公司 | Image recognition and prediction model processing method, three-dimensional modeling method and device |
| CN115424105A (en) * | 2022-08-31 | 2022-12-02 | 重庆长安汽车股份有限公司 | Method, device, system, computer equipment and medium for peripheral vision moving object fusion |
| CN115690914A (en) * | 2022-11-08 | 2023-02-03 | 中国移动通信集团四川有限公司 | Abnormal behavior reminder method, device, electronic equipment and storage medium |
| CN116612594A (en) * | 2023-05-11 | 2023-08-18 | 深圳市云之音科技有限公司 | Intelligent monitoring and outbound system and method based on big data |
| CN116866522A (en) * | 2023-07-11 | 2023-10-10 | 广州市图威信息技术服务有限公司 | Remote monitoring method |
| CN116866522B (en) * | 2023-07-11 | 2024-05-17 | 广州市图威信息技术服务有限公司 | Remote monitoring method |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105979203B (en) | 2019-04-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105979203A (en) | Multi-camera cooperative monitoring method and device | |
| Wang et al. | Multi-sensor fusion technology for 3D object detection in autonomous driving: A review | |
| RU2635066C2 (en) | Method of detecting human objects in video (versions) | |
| CN106296721B (en) | Object aggregation detection method and device based on stereoscopic vision | |
| Balali et al. | Multi-class US traffic signs 3D recognition and localization via image-based point cloud model using color candidate extraction and texture-based recognition | |
| CN114359744A (en) | Depth estimation method based on fusion of laser radar and event camera | |
| CN104715471B (en) | Target locating method and its device | |
| CN111753609A (en) | Method, device and camera for target recognition | |
| CN111880191A (en) | Map generation method based on multi-agent laser radar and visual information fusion | |
| CN102521842B (en) | Method and device for detecting fast movement | |
| CN115797408A (en) | Target tracking method and device fusing multi-view image and three-dimensional point cloud | |
| CN114494292A (en) | Method and system for extracting building facade glass area | |
| CN119323777B (en) | Automatic obstacle avoidance system of automobile based on real-time 3D target detection | |
| CN103729620B (en) | A kind of multi-view pedestrian detection method based on multi-view Bayesian network | |
| CN103700106A (en) | Distributed-camera-based multi-view moving object counting and positioning method | |
| Shalaby et al. | Algorithms and applications of structure from motion (SFM): A survey | |
| Tian et al. | Ucdnet: Multi-uav collaborative 3-d object detection network by reliable feature mapping | |
| CN117576653A (en) | Target tracking methods, devices, computer equipment and storage media | |
| CN119478858A (en) | A radar and video information fusion coding method for vehicle overload warning | |
| CN113160299B (en) | Vehicle video speed measurement method based on Kalman filtering and computer readable storage medium | |
| CN118675023A (en) | Vehicle-road cooperative sensing method, device, equipment, storage medium and program product | |
| CN113673569B (en) | Target detection method, device, electronic device, and storage medium | |
| CN112084854B (en) | Obstacle detection method, obstacle detection device and robot | |
| CN116912877B (en) | Method and system for monitoring space-time contact behavior sequence of urban public space crowd | |
| Gao et al. | Online building segmentation from ground-based LiDAR data in urban scenes |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190423; Termination date: 20200429 |