
CN103325121A - Method and system for estimating network topological relations of cameras in monitoring scenes - Google Patents


Info

Publication number
CN103325121A
CN103325121A (application CN201310270349)
Authority
CN
China
Prior art keywords
grid
optical flow
monitoring scene
camera
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013102703492A
Other languages
Chinese (zh)
Other versions
CN103325121B (en)
Inventor
张红广
崔建竹
唐潮
田飞
王鹏
邓娜娜
蒋建彬
马娜
高会武
徐尚鹏
季益华
马铁
宋成国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SMART CITY INFORMATION TECHNOLOGY Co Ltd
Shanghai Advanced Research Institute of CAS
China Security and Surveillance Technology PRC Inc
Original Assignee
SMART CITY INFORMATION TECHNOLOGY Co Ltd
Shanghai Advanced Research Institute of CAS
China Security and Surveillance Technology PRC Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SMART CITY INFORMATION TECHNOLOGY Co Ltd, Shanghai Advanced Research Institute of CAS, China Security and Surveillance Technology PRC Inc filed Critical SMART CITY INFORMATION TECHNOLOGY Co Ltd
Priority to CN201310270349.2A priority Critical patent/CN103325121B/en
Publication of CN103325121A publication Critical patent/CN103325121A/en
Application granted granted Critical
Publication of CN103325121B publication Critical patent/CN103325121B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract



The present invention applies to the field of security technology and provides a method and system for estimating the network topology of cameras in surveillance scenes. The method includes: decomposing the surveillance scene in each video stream captured by the surveillance network into grid cells; obtaining the color histogram information of the optical flow of each grid cell in the scene; clustering the grid cells of the scene according to that color histogram information, obtaining a semantic region segmentation of the scene; and determining the network topology among the cameras according to the semantic region segmentation of each scene. The invention solves a problem of the prior art: because existing methods compute the topology between cameras from the localization and tracking of specific moving targets, their performance drops sharply when the surveillance environment contains occlusions or the surveillance images have low resolution.


Description

Method and system for estimating the network topology of cameras in surveillance scenes

Technical Field

The invention belongs to the field of security technology, and in particular relates to a method and system for estimating the network topology of cameras in surveillance scenes.

Background Art

Topology estimation is a key problem in deploying a camera network: an accurate estimate not only captures the movement patterns of individuals, crowds and other targets in the monitored area, but can also be fed back to further optimize the deployment.

The prior art provides several approaches to camera-network topology estimation, including:

1. Using person detection and tracking results obtained by background subtraction to derive the correlation of crowd activity across multiple cameras, providing a basis for analyzing and modeling target activity patterns over the whole scene.

2. Using the step (gait) information of people captured by multiple cameras to learn general patterns of personal activity, then re-adjusting the camera deployment according to these patterns so that the monitored targets are covered from better viewpoints with fewer cameras.

3. Using a mixed probability-density estimator based on Parzen windows and Gaussian kernels to estimate a probability density function over quantities such as the transit time interval, the entry/exit positions in the observed field of view, and the velocity at entry and exit; the whole estimation is learned from training-set data.

4. In the temporal domain, using a fuzzy time interval to represent the possibility that an observed target will appear in the next camera, the possibility being estimated from motion equations.

5. Using a large amount of target observation data and unsupervised learning to automatically establish the spatio-temporal topology among the cameras of a multi-camera surveillance network; on this basis, the authors also give a method for validating the algorithm's performance and realize target tracking in the network.

6. Using a more general information-theoretic notion of statistical belief that combines uncertain correspondence with Bayesian methods, reducing the assumptions required and showing good performance.

7. Assuming that every pair of cameras is potentially connected and then pruning impossible connections from observations; experiments show that this method learns the topology of large-scale camera networks efficiently and effectively, especially when training samples are scarce.

8. A large body of work uses the topology of multiple cameras for global activity analysis and pedestrian re-identification.

However, these topology-inference algorithms are essentially based on localizing and tracking specific moving targets and therefore demand high-quality surveillance video: when the surveillance environment contains occlusions or the surveillance images have low resolution, their performance drops sharply.

Summary of the Invention

The purpose of the embodiments of the present invention is to provide a method and system for estimating the network topology of cameras in surveillance scenes, so as to solve the above problem of the prior art: existing topology-inference algorithms are based on localizing and tracking specific targets and demand high-quality surveillance video, so their performance drops sharply under occlusion or at low image resolution.

An embodiment of the present invention is realized as a method for estimating the network topology of cameras in surveillance scenes, the method comprising the following steps:

decomposing the surveillance scene in the video stream captured by each camera of the surveillance network into grid cells;

for each surveillance scene, obtaining the color histogram information of the optical flow of each grid cell in the scene;

for each surveillance scene, clustering the grid cells according to the color histogram information of their optical flow, obtaining a semantic region segmentation of the scene;

determining the network topology among the cameras of the surveillance network according to the semantic region segmentation of each scene.

The purpose of another embodiment of the present invention is to provide a system for estimating the network topology of cameras in surveillance scenes, the system comprising:

a decomposition unit, configured to decompose the surveillance scene in the video stream captured by each camera of the surveillance network into grid cells;

an acquisition unit, configured to obtain, for each surveillance scene, the color histogram information of the optical flow of each grid cell in the scene;

a clustering unit, configured to cluster, for each surveillance scene, the grid cells according to the color histogram information of their optical flow, obtaining a semantic region segmentation of the scene;

a determining unit, configured to determine the network topology among the cameras of the surveillance network according to the semantic region segmentation of each scene.

The embodiments of the present invention compute the color histogram features of the optical flow of each grid cell with an optical-flow algorithm and from them derive the topology among the cameras; they do not require precise localization or tracking of moving targets. This solves the problem of the prior art that topology computation between cameras is based on localizing and tracking specific targets, demands high-quality surveillance video, and degrades sharply when the environment contains occlusions or the images have low resolution.

Brief Description of the Drawings

To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of the method for estimating the network topology of cameras in surveillance scenes provided by an embodiment of the present invention;

Fig. 2 shows the camera topology estimation results provided by another embodiment of the present invention;

Fig. 3 is a sketch of the floor plan and camera deployment provided by another embodiment of the present invention;

Fig. 4 is a block diagram of the system for estimating the network topology of cameras in surveillance scenes provided by another embodiment of the present invention.

Detailed Description

To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.

An embodiment of the present invention provides a method for estimating the network topology of cameras in surveillance scenes. The method is shown in Fig. 1 and comprises the following steps.

In step S101, the surveillance scene in the video stream captured by each camera of the surveillance network is decomposed into grid cells.

In this embodiment, the surveillance network contains at least two cameras, and each camera captures a video stream consisting of multiple frames, each frame being an image; among these frames, the images through which a moving target passes constitute the surveillance scene.

It should be noted that the grid-cell size is typically 10×10 and may also be preset, but the surveillance scenes of all cameras' video streams must be decomposed with the same cell size.
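Step S101 can be sketched as follows. This is an illustrative, minimal decomposition of a frame into 10×10 cells (the patent's default cell size), not the patent's implementation; the 2-D list frame representation and the function name are assumptions.

```python
def split_into_grids(frame, gh=10, gw=10):
    """Split a frame (2-D list of pixel values) into gh x gw cells.

    Returns a dict mapping (row, col) cell indices to the flat list of
    pixel values in that cell. Frame dimensions that are not exact
    multiples of the cell size keep a smaller border cell.
    """
    h, w = len(frame), len(frame[0])
    cells = {}
    for r0 in range(0, h, gh):
        for c0 in range(0, w, gw):
            cells[(r0 // gh, c0 // gw)] = [
                frame[r][c]
                for r in range(r0, min(r0 + gh, h))
                for c in range(c0, min(c0 + gw, w))
            ]
    return cells

# A 20x30 frame yields a 2x3 arrangement of 10x10 cells.
frame = [[r * 30 + c for c in range(30)] for r in range(20)]
cells = split_into_grids(frame)
```

Since every camera's scene must be decomposed with the same cell size, the same `gh`, `gw` would be reused for all streams.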

In step S102, for each surveillance scene, the color histogram information of the optical flow of each grid cell in the scene is obtained.

Specifically, the color histogram information of the optical flow of each grid cell in the surveillance scene is obtained as follows.

Define the video stream captured by a camera as I_n(X), where X is the coordinate of a grid cell in the surveillance scene of the video stream, X = (x; y)^T, x being the abscissa of the cell, y its ordinate, T denoting matrix transposition, and n the index of a video frame in the stream.

Define

W(X; p) = \begin{pmatrix} (1+p_1)x + p_3 y + p_5 \\ p_2 x + (1+p_4)y + p_6 \end{pmatrix};   (1)

where W denotes the deformable (warp) template, p = (p_1, p_2, p_3, p_4, p_5, p_6)^T, p_1, p_2, p_3, p_4 are 0, and p_5, p_6 are the optical-flow information of the grid cell;

Define

p = \arg\min_p \sum_x [ I(W(x; p + \Delta p)) - T(x) ]^2;   (2)

where Δp is the difference of p between two successive iterations, and T(x) is the grid decomposition of the first frame of the video stream;

It should be noted that the grid decomposition of the first frame of the video stream refers to the grid cells obtained by decomposing the first frame image of the stream.

Iterate according to (3), (4) and (5) until Δp is smaller than a preset threshold ε:

\nabla I = W(\Delta x; p) = \begin{pmatrix} I_x + p_5 \\ I_y + p_6 \end{pmatrix};   (3)

where I_x is the gradient map of the grid cell along the x axis, I_y its gradient map along the y axis, and ∇I the gradient map of the cell after the warp W(X; p);

H = \sum_x [\nabla I]^T [\nabla I];   (4)

\Delta p = H^{-1} \sum_x [\nabla I]^T [ T(x) - I(W(x; p)) ];   (5)

Compute p_5 and p_6 at the point where Δp becomes smaller than the preset threshold ε;
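The iteration of Eqs. (2)–(5) reads as a translation-only Lucas–Kanade-style Gauss–Newton recursion: build H = Σ∇Iᵀ∇I, take Δp = H⁻¹Σ∇Iᵀ(T − I(W)), and repeat until |Δp| < ε. The following is a hedged sketch under that reading, for a single cell and a pure-translation warp; all names are illustrative, and the image, its gradients and the template are passed as callables rather than sampled pixel arrays for brevity.

```python
import math

def estimate_translation(I, Ix, Iy, T, pts, eps=1e-6, max_iter=100):
    """Gauss-Newton estimate of a pure-translation warp
    W(x; p) = (x + p5, y + p6):
      H  = sum over pts of grad^T grad          (Eq. 4)
      dp = H^-1 sum of grad^T (T(x) - I(W(x)))  (Eq. 5)
    iterated until |dp| < eps (the patent's epsilon)."""
    p5 = p6 = 0.0
    for _ in range(max_iter):
        h11 = h12 = h22 = b1 = b2 = 0.0
        for (x, y) in pts:
            gx, gy = Ix(x + p5, y + p6), Iy(x + p5, y + p6)
            e = T(x, y) - I(x + p5, y + p6)
            h11 += gx * gx; h12 += gx * gy; h22 += gy * gy
            b1 += gx * e; b2 += gy * e
        det = h11 * h22 - h12 * h12
        dp5 = (h22 * b1 - h12 * b2) / det   # 2x2 inverse applied by hand
        dp6 = (-h12 * b1 + h11 * b2) / det
        p5 += dp5; p6 += dp6
        if math.hypot(dp5, dp6) < eps:
            break
    return p5, p6

# Synthetic check: the "current frame" is the template shifted by the
# true flow (0.3, -0.2), so the estimate should recover that shift.
I = lambda x, y: 2 * x + 3 * y + 0.5 * x * y
Ix = lambda x, y: 2 + 0.5 * y
Iy = lambda x, y: 3 + 0.5 * x
T = lambda x, y: I(x + 0.3, y - 0.2)
pts = [(x, y) for x in range(5) for y in range(5)]
p5, p6 = estimate_translation(I, Ix, Iy, T, pts)
```

The recovered (p5, p6) plays the role of the per-cell flow information of Eq. (1) with p_1..p_4 fixed at 0.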

Obtain the optical flow from the three RGB components of the grid cell's optical-flow information, yielding the color optical-flow information of the cell;

According to the color optical-flow information of the cell, compute the histogram of the optical flow over 8 directions; this 8-direction histogram is the color histogram feature of the cell's optical flow, which includes the horizontal optical flow u'_b and the vertical optical flow v'_b.

It should be noted that the 8 directions are spaced every 45 degrees.
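The 8-direction (45°-spaced) histogram of step S102 might be computed as below. Weighting each vote by the flow magnitude is an assumption; the patent only specifies the eight bins, so an unweighted count would also fit its description.

```python
import math

def flow_histogram(flows):
    """Bin 2-D flow vectors (u, v) into 8 orientation bins of 45
    degrees each; each vector votes its magnitude into the bin
    containing its angle (magnitude weighting is an assumption)."""
    hist = [0.0] * 8
    for u, v in flows:
        mag = math.hypot(u, v)
        if mag == 0:
            continue  # zero flow carries no direction
        ang = math.atan2(v, u) % (2 * math.pi)
        hist[int(ang // (math.pi / 4)) % 8] += mag
    return hist

# Rightward, up-right, up-left and down-ish flows land in bins 0, 0, 3, 6.
flows = [(1, 0), (2, 1), (-2, 1), (0.5, -1)]
hist = flow_histogram(flows)
```

In the patent's notation, the horizontal and vertical components u'_b, v'_b of each cell b would feed such a histogram per cell.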

In step S103, for each surveillance scene, the grid cells of the scene are clustered according to the color histogram information of their optical flow, obtaining the semantic region segmentation of the scene.

Specifically, step S103 is implemented as:

u_n = \sum_{b \in r_n} u'_b;   (6)

v_n = \sum_{b \in r_n} v'_b.   (7)
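The patent does not name the clustering algorithm used in step S103, so the sketch below uses a minimal k-means over per-cell flow features as one plausible choice; the region sums of Eqs. (6)–(7) would then be taken over each resulting cluster r_n.

```python
def kmeans(feats, k=2, iters=20):
    """Minimal k-means over per-cell feature vectors (illustrative:
    the patent only says the cells are clustered). Seeds are taken at
    evenly spaced indices of the input."""
    step = max((len(feats) - 1) // max(k - 1, 1), 1)
    cents = [list(feats[min(i * step, len(feats) - 1)]) for i in range(k)]
    assign = [0] * len(feats)
    for _ in range(iters):
        for i, x in enumerate(feats):
            d2 = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in cents]
            assign[i] = d2.index(min(d2))  # nearest centroid
        for c in range(k):
            members = [feats[i] for i in range(len(feats)) if assign[i] == c]
            if members:
                cents[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Two synthetic "semantic regions": cells with mostly horizontal flow
# and cells with mostly vertical flow should land in different clusters.
feats = [[5, 0], [4, 1], [5, 1], [0, 5], [1, 4], [0, 4]]
labels = kmeans(feats)
```

Each cluster of cells plays the role of one semantic region r_n, over which u_n and v_n are summed.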

In step S104, the network topology among the cameras of the surveillance network is determined according to the semantic region segmentation of each scene.

Specifically, take two cameras as an example, referred to as the first camera and the second camera ("first" and "second" imply no order and serve only to distinguish the cameras). The first camera captures a first video stream and the second camera captures a second video stream; a video stream consists of multiple frames, each frame an image, the first stream containing first images and the second stream containing second images. Among the first images, those through which a moving target passes form the first surveillance scene, the moving target including a person, an animal or another physical object; among the second images, those through which a moving target passes form the second surveillance scene.

\rho_{a_i,a_j}(\tau) = \frac{E[a_i c]}{\sqrt{E[a_i^2]\,E[c^2]}};   (8)

\hat{\tau}_{a_i,a_j} = \arg\max_{\tau \in \Gamma} \rho_{a_i,a_j}(\tau);   (9)

\Psi_{i,j} = \rho_{a_i,a_j}(\hat{\tau}) \,(1 - \hat{\tau}_{a_i,a_j});   (10)

where a_i denotes the color histogram feature of the optical flow of a first grid cell, the first cell resulting from decomposing the first surveillance scene captured by the first camera; a_j denotes the color histogram feature of the optical flow of a second grid cell, the second cell resulting from decomposing the second surveillance scene captured by the second camera; the first and second cameras are any two cameras of the surveillance network; c denotes the second grid cell after time shift τ; \rho_{a_i,a_j}(\tau) denotes the correlation between the color histogram features of the optical flow of the first and second cells; \hat{\tau}_{a_i,a_j} denotes the time shift between the first and second cells; and \Psi_{i,j} denotes the estimated topological relationship between the first and second cameras.

It should be noted that when Ψ_{i,j} is greater than 0.5 the first and second cameras are considered topologically related; in step S104, the topological-relationship estimate must be computed for every pair of cameras.
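Eqs. (8)–(10) can be sketched as a lag-searched normalized correlation between two cells' activity series. Normalizing the best lag by the maximum search lag in Ψ below is an assumption, since the patent does not spell out how τ̂ enters the factor (1 − τ̂); all series and names here are illustrative.

```python
def correlation(a, b):
    """Normalized correlation rho = E[a*b] / sqrt(E[a^2] E[b^2])
    between two equal-length activity series, per Eq. (8)."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def topology_score(a_i, a_j, max_lag):
    """Search the lag maximizing rho (Eq. 9) and return the score
    Psi = rho(tau_hat) * (1 - tau_hat / max_lag)  (Eq. 10, with an
    assumed normalization of the lag)."""
    best_lag, best_rho = 0, -1.0
    for tau in range(max_lag + 1):
        # compare a_i[t] with a_j[t + tau]: camera j lags camera i
        rho = correlation(a_i[:len(a_i) - tau], a_j[tau:])
        if rho > best_rho:
            best_rho, best_lag = rho, tau
    return best_rho * (1 - best_lag / max_lag)

# Camera j sees the same activity burst 2 frames after camera i,
# so the pair scores above the 0.5 threshold mentioned in the text.
a_i = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
a_j = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]
score = topology_score(a_i, a_j, max_lag=10)
```

Running this pairwise over all cameras and thresholding the score at 0.5 would yield the relatedness graph of step S104.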

Another embodiment of the present invention provides the camera topology estimation results shown in Fig. 2, obtained as follows.

This embodiment selected 7 cameras on the first floor of the administrative building of the Shanghai Advanced Research Institute, Chinese Academy of Sciences, and used the video streams from 11:00 a.m. to 1:00 p.m. of one day as samples to compute the topology estimates.

In the experiment, the 7 cameras were deployed on the same floor: cameras ① and ③ at elevator entrances, and the remaining cameras at the entrances of 5 corridors. A sketch of the floor and camera deployment is shown in Fig. 3.

The circled numbers in Fig. 2 are the camera numbers and correspond to those of Fig. 3. In the experimental results, a solid line between two cameras indicates that their monitored targets are related, i.e. the same targets appear in both cameras' fields of view, which reflects target activity trends in a probabilistic sense; the absence of a solid line indicates that the two cameras are unrelated or only weakly related. For example, cameras ①, ⑥ and ⑦ are strongly related: camera ⑦ is at the main entrance of the administrative building, and anyone entering the floor must either go upstairs by the elevator at camera ① or pass through the corridor at camera ⑥ into the canteen at the rear of the first floor. Since the selected period is lunch time, many people from the second floor and above take the elevator at camera ① down to the first floor and then enter the canteen through the corridor at camera ⑥, returning the same way after lunch. Cameras ② and ③ are unrelated (or only very weakly related): the elevator at camera ③ is a freight elevator used only by the canteen, and the area between the two cameras is the canteen's back kitchen, with no direct passage.

Another embodiment of the present invention provides a system for estimating the network topology of cameras in surveillance scenes. The module structure of the system is shown in Fig. 4 and specifically comprises:

a decomposition unit 41, configured to decompose the surveillance scene in the video stream captured by each camera of the surveillance network into grid cells;

an acquisition unit 42, configured to obtain, for each surveillance scene, the color histogram information of the optical flow of each grid cell in the scene;

a clustering unit 43, configured to cluster, for each surveillance scene, the grid cells according to the color histogram information of their optical flow, obtaining the semantic region segmentation of the scene;

a determining unit 44, configured to determine the network topology among the cameras of the surveillance network according to the semantic region segmentation of each scene.

Optionally, the acquisition unit 42 is specifically configured to:

define the video stream captured by a camera as I_n(X), where X is the coordinate of a grid cell in the surveillance scene of the video stream, X = (x; y)^T, x being the abscissa of the cell, y its ordinate, T denoting matrix transposition, and n the index of a video frame in the stream;

define

W(X; p) = \begin{pmatrix} (1+p_1)x + p_3 y + p_5 \\ p_2 x + (1+p_4)y + p_6 \end{pmatrix};   (1)

where W denotes the deformable (warp) template, p = (p_1, p_2, p_3, p_4, p_5, p_6)^T, p_1, p_2, p_3, p_4 are 0, and p_5, p_6 are the optical-flow information of the grid cell;

define

p = \arg\min_p \sum_x [ I(W(x; p + \Delta p)) - T(x) ]^2;   (2)

where Δp is the difference of p between two successive iterations, and T(x) is the grid decomposition of the first frame of the video stream;

iterate according to (3), (4) and (5) until Δp is smaller than the preset threshold ε:

\nabla I = W(\Delta x; p) = \begin{pmatrix} I_x + p_5 \\ I_y + p_6 \end{pmatrix};   (3)

where I_x is the gradient map of the grid cell along the x axis, I_y its gradient map along the y axis, and ∇I the gradient map of the cell after the warp W(X; p);

H = \sum_x [\nabla I]^T [\nabla I];   (4)

\Delta p = H^{-1} \sum_x [\nabla I]^T [ T(x) - I(W(x; p)) ];   (5)

compute p_5 and p_6 at the point where Δp becomes smaller than the preset threshold ε;

obtain the optical flow from the three RGB components of the grid cell's optical-flow information, yielding the color optical-flow information of the cell;

according to the color optical-flow information of the cell, compute the histogram of the optical flow over 8 directions; this 8-direction histogram is the color histogram feature of the cell's optical flow, which includes the horizontal optical flow u'_b and the vertical optical flow v'_b.

Optionally, the clustering unit 43 is specifically configured to compute:

u_n = \sum_{b \in r_n} u'_b;   (6)

v_n = \sum_{b \in r_n} v'_b.   (7)

Optionally, the determining unit 44 is specifically configured to compute:

\rho_{a_i,a_j}(\tau) = \frac{E[a_i c]}{\sqrt{E[a_i^2]\,E[c^2]}};   (8)

\hat{\tau}_{a_i,a_j} = \arg\max_{\tau \in \Gamma} \rho_{a_i,a_j}(\tau);   (9)

\Psi_{i,j} = \rho_{a_i,a_j}(\hat{\tau}) \,(1 - \hat{\tau}_{a_i,a_j});   (10)

where a_i denotes the color histogram feature of the optical flow of a first grid cell, the first cell resulting from decomposing the first surveillance scene captured by the first camera; a_j denotes the color histogram feature of the optical flow of a second grid cell, the second cell resulting from decomposing the second surveillance scene captured by the second camera; the first and second cameras are any two cameras of the surveillance network; c denotes the second grid cell after time shift τ; \rho_{a_i,a_j}(\tau) denotes the correlation between the color histogram features of the optical flow of the first and second cells; \hat{\tau}_{a_i,a_j} denotes the time shift between the first and second cells; and \Psi_{i,j} denotes the estimated topological relationship between the first and second cameras.

Optionally, the 8 directions are spaced every 45 degrees.

Those of ordinary skill in the art will understand that the modules included in the above embodiments are divided only by functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; moreover, the specific names of the functional modules serve only to distinguish them from one another and do not limit the protection scope of the present invention.

Those of ordinary skill in the art will also understand that all or part of the steps of the methods of the above embodiments can be completed by instructing the related hardware through a program, and the program can be stored in a readable storage medium, the storage medium including ROM/RAM and the like.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. A method for estimating the camera network topology in a surveillance network, characterized in that the method comprises:

decomposing the monitoring scene in the video stream captured by each camera in the surveillance network into grids;

for each monitoring scene, acquiring the color histogram information of the optical flow of each grid in the monitoring scene;

for each monitoring scene, clustering the grids in the monitoring scene according to the color histogram information of the optical flow of each grid, to obtain a semantic region segmentation result of the monitoring scene; and

determining the network topology among the cameras in the surveillance network according to the semantic region segmentation result of each monitoring scene.

2. The method according to claim 1, wherein acquiring the color histogram information of the optical flow of each grid in the monitoring scene specifically comprises:

defining the video stream captured by the camera as $I_n(X)$, where $X = (x; y)^T$ is the coordinate of a grid in the monitoring scene of the video stream, $x$ is the abscissa of the grid, $y$ is the ordinate of the grid, $T$ denotes the transpose of a matrix, and $n$ is the index of a video frame in the video stream;

defining

$$W(X; p) = \begin{pmatrix} (1+p_1)x + p_3 y + p_5 \\ p_2 x + (1+p_4)y + p_6 \end{pmatrix}, \quad (1)$$

where $W$ denotes a deformable template, $p = (p_1, p_2, p_3, p_4, p_5, p_6)^T$, $p_1$, $p_2$, $p_3$, $p_4$ are 0, and $p_5$, $p_6$ are the optical-flow information of the grid;

defining

$$p = \arg\min_p \sum_x \left[ I(W(x; p + \Delta p)) - T(x) \right]^2, \quad (2)$$

where $\Delta p$ denotes the difference in $p$ between two iterations, and $T(x)$ denotes a grid decomposed from the first frame of the video stream;

iterating according to (3), (4), and (5) until $\Delta p$ is smaller than a preset threshold $\varepsilon$:

$$\nabla I = W(\Delta x; p) = \begin{pmatrix} I_x + p_5 \\ I_y + p_6 \end{pmatrix}, \quad (3)$$

where $I_x$ denotes the gradient map of the grid in the x-axis direction, $I_y$ denotes the gradient map of the grid in the y-axis direction, and $\nabla I$ denotes the gradient map of the grid after transformation by the deformable template $W(X; p)$;

$$H = \sum_x [\nabla I]^T [\nabla I]; \quad (4)$$

$$\Delta p = H^{-1} \sum_x [\nabla I]^T \left[ T(x) - I(W(x; p)) \right]; \quad (5)$$

computing $p_5$ and $p_6$ when $\Delta p$ is smaller than the preset threshold $\varepsilon$;

obtaining the optical flow from the three RGB components of the optical-flow information of the grid, to obtain the color optical-flow information of the grid; and

computing, from the color optical-flow information of the grid, the histogram information of the optical flow in 8 directions, the histogram information of the optical flow in the 8 directions being the color histogram feature of the optical flow of the grid, which includes the optical flow $u'_b$ in the horizontal direction and the optical flow $v'_b$ in the vertical direction.

3. The method according to claim 2, wherein, for each monitoring scene, clustering the grids in the monitoring scene according to the color histogram information of the optical flow of each grid to obtain the semantic region segmentation result of the monitoring scene specifically comprises:

$$u_n = \sum_{b \in r_n} u'_b; \quad (6)$$

$$v_n = \sum_{b \in r_n} v'_b. \quad (7)$$

4. The method according to claim 3, wherein determining the network topology among the cameras in the surveillance network according to the semantic region segmentation result of each monitoring scene specifically comprises:

$$\rho_{a_i, a_j}(\tau) = \frac{E[a_i c]}{\sqrt{E[a_i^2]\, E[c^2]}}; \quad (8)$$

$$\hat{\tau}_{a_i, a_j} = \arg\max_\tau \frac{\sum \rho_{a_i, a_j}(\tau)}{\Gamma}; \quad (9)$$

$$\Psi_{i,j} = \rho_{a_i, a_j}(\tau) \left( 1 - \hat{\tau}_{a_i, a_j} \right); \quad (10)$$

where $a_i$ denotes the color histogram feature of the optical flow of a first grid, the first grid being decomposed from a first monitoring scene captured by a first camera; $a_j$ denotes the color histogram feature of the optical flow of a second grid, the second grid being decomposed from a second monitoring scene captured by a second camera; the first camera and the second camera are any two cameras in the surveillance network; $c$ denotes the second grid after a time shift of $\tau$; $\rho_{a_i, a_j}(\tau)$ denotes the correlation between the color histogram feature of the optical flow of the first grid and that of the second grid; $\hat{\tau}_{a_i, a_j}$ denotes the time shift between the first grid and the second grid; and $\Psi_{i,j}$ denotes the estimated topological relation between the first camera and the second camera.
5. The method according to claim 2, wherein the 8 directions are specifically one direction every 45 degrees.

6. A system for estimating the camera network topology in a surveillance network, characterized in that the system comprises:

a decomposition unit, configured to decompose the monitoring scene in the video stream captured by each camera in the surveillance network into grids;

an acquisition unit, configured to acquire, for each monitoring scene, the color histogram information of the optical flow of each grid in the monitoring scene;

a clustering unit, configured to cluster, for each monitoring scene, the grids in the monitoring scene according to the color histogram information of the optical flow of each grid, to obtain a semantic region segmentation result of the monitoring scene; and

a determining unit, configured to determine the network topology among the cameras in the surveillance network according to the semantic region segmentation result of each monitoring scene.

7. The system according to claim 6, wherein the acquisition unit is specifically configured to:

define the video stream captured by the camera as $I_n(X)$, where $X = (x; y)^T$ is the coordinate of a grid in the monitoring scene of the video stream, $x$ is the abscissa of the grid, $y$ is the ordinate of the grid, $T$ denotes the transpose of a matrix, and $n$ is the index of a video frame in the video stream;

define

$$W(X; p) = \begin{pmatrix} (1+p_1)x + p_3 y + p_5 \\ p_2 x + (1+p_4)y + p_6 \end{pmatrix}, \quad (1)$$

where $W$ denotes a deformable template, $p = (p_1, p_2, p_3, p_4, p_5, p_6)^T$, $p_1$, $p_2$, $p_3$, $p_4$ are 0, and $p_5$, $p_6$ are the optical-flow information of the grid;

define

$$p = \arg\min_p \sum_x \left[ I(W(x; p + \Delta p)) - T(x) \right]^2, \quad (2)$$

where $\Delta p$ denotes the difference in $p$ between two iterations, and $T(x)$ denotes a grid decomposed from the first frame of the video stream;

iterate according to (3), (4), and (5) until $\Delta p$ is smaller than a preset threshold $\varepsilon$:

$$\nabla I = W(\Delta x; p) = \begin{pmatrix} I_x + p_5 \\ I_y + p_6 \end{pmatrix}, \quad (3)$$

where $I_x$ denotes the gradient map of the grid in the x-axis direction, $I_y$ denotes the gradient map of the grid in the y-axis direction, and $\nabla I$ denotes the gradient map of the grid after transformation by the deformable template $W(X; p)$;

$$H = \sum_x [\nabla I]^T [\nabla I]; \quad (4)$$

$$\Delta p = H^{-1} \sum_x [\nabla I]^T \left[ T(x) - I(W(x; p)) \right]; \quad (5)$$

compute $p_5$ and $p_6$ when $\Delta p$ is smaller than the preset threshold $\varepsilon$;

obtain the optical flow from the three RGB components of the optical-flow information of the grid, to obtain the color optical-flow information of the grid; and

compute, from the color optical-flow information of the grid, the histogram information of the optical flow in 8 directions, the histogram information of the optical flow in the 8 directions being the color histogram feature of the optical flow of the grid, which includes the optical flow $u'_b$ in the horizontal direction and the optical flow $v'_b$ in the vertical direction.

8. The system according to claim 7, wherein the clustering unit is specifically configured to compute:

$$u_n = \sum_{b \in r_n} u'_b; \quad (6)$$

$$v_n = \sum_{b \in r_n} v'_b. \quad (7)$$

9. The system according to claim 8, wherein the determining unit is specifically configured to compute:

$$\rho_{a_i, a_j}(\tau) = \frac{E[a_i c]}{\sqrt{E[a_i^2]\, E[c^2]}}; \quad (8)$$

$$\hat{\tau}_{a_i, a_j} = \arg\max_\tau \frac{\sum \rho_{a_i, a_j}(\tau)}{\Gamma}; \quad (9)$$

$$\Psi_{i,j} = \rho_{a_i, a_j}(\tau) \left( 1 - \hat{\tau}_{a_i, a_j} \right); \quad (10)$$

where $a_i$ denotes the color histogram feature of the optical flow of a first grid, the first grid being decomposed from a first monitoring scene captured by a first camera; $a_j$ denotes the color histogram feature of the optical flow of a second grid, the second grid being decomposed from a second monitoring scene captured by a second camera; the first camera and the second camera are any two cameras in the surveillance network; $c$ denotes the second grid after a time shift of $\tau$; $\rho_{a_i, a_j}(\tau)$ denotes the correlation between the color histogram feature of the optical flow of the first grid and that of the second grid; $\hat{\tau}_{a_i, a_j}$ denotes the time shift between the first grid and the second grid; and $\Psi_{i,j}$ denotes the estimated topological relation between the first camera and the second camera.
10. The system according to claim 7, wherein the 8 directions are specifically one direction every 45 degrees.
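With the affine terms $p_1$ through $p_4$ fixed to zero as in claims 2 and 7, the iteration of equations (1) through (5) reduces to a translation-only Lucas-Kanade alignment run independently per grid, and claims 5 and 10 quantize the resulting flow into eight 45-degree bins. The following Python sketch is not part of the patent; the function names, the nearest-neighbour warping, and the small damping term added to $H$ are illustrative assumptions:

```python
import numpy as np

def grid_flow(template, frame, eps=1e-3, max_iter=50):
    """Estimate a pure-translation flow (p5, p6) for one grid cell by the
    Gauss-Newton update of equations (3)-(5); with p1..p4 zero the warp is
    simply W(x; p) = x + (p5, p6)."""
    Iy, Ix = np.gradient(frame.astype(float))  # gradient maps of the frame
    p5 = p6 = 0.0
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(max_iter):
        # sample the frame at the translated coordinates (nearest neighbour)
        xi = np.clip(np.round(xs + p5).astype(int), 0, frame.shape[1] - 1)
        yi = np.clip(np.round(ys + p6).astype(int), 0, frame.shape[0] - 1)
        warped = frame[yi, xi].astype(float)
        err = template.astype(float) - warped        # T(x) - I(W(x; p))
        gx, gy = Ix[yi, xi], Iy[yi, xi]
        # 2x2 Gauss-Newton Hessian (eq. 4) and parameter update (eq. 5)
        H = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        b = np.array([np.sum(gx * err), np.sum(gy * err)])
        dp = np.linalg.solve(H + 1e-8 * np.eye(2), b)
        p5, p6 = p5 + dp[0], p6 + dp[1]
        if np.linalg.norm(dp) < eps:                 # stop when Δp < ε
            break
    return p5, p6

def direction_histogram(u, v, bins=8):
    """Accumulate flow vectors into 8 direction bins, one every 45 degrees
    (claims 5 and 10), weighted by flow magnitude."""
    ang = np.arctan2(v, u) % (2 * np.pi)
    idx = (ang / (2 * np.pi / bins)).astype(int) % bins
    return np.bincount(idx.ravel(),
                       weights=np.hypot(u, v).ravel(), minlength=bins)
```

A practical implementation would normally use bilinear interpolation instead of rounding for sub-pixel accuracy; nearest-neighbour sampling is kept here only for brevity.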
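Equations (8) through (10) score a candidate link between two cameras by a normalized cross-correlation of grid activity under a time shift. A hedged Python sketch follows; treating $a_i$ and $a_j$ as one-dimensional activity series per grid, and normalizing the best shift by the search window (standing in for $\Gamma$), are assumptions made only for illustration:

```python
import numpy as np

def correlation(a_i, a_j, tau):
    """Normalized correlation of equation (8): c is the activity series of
    the second grid delayed by tau samples."""
    if tau > 0:
        a, c = a_i[:-tau], a_j[tau:]
    elif tau < 0:
        a, c = a_i[-tau:], a_j[:tau]
    else:
        a, c = a_i, a_j
    denom = np.sqrt(np.mean(a * a) * np.mean(c * c))
    return np.mean(a * c) / denom if denom > 0 else 0.0

def topology_score(a_i, a_j, max_tau=20):
    """Search the time shift that maximizes the correlation (eq. 9) and
    combine it with the correlation into the link score of eq. (10)."""
    rhos = [correlation(a_i, a_j, t) for t in range(max_tau + 1)]
    t_best = int(np.argmax(rhos))
    # normalize the best shift by the search window before the (1 - tau)
    # weighting, so closer-in-time grids score higher
    tau_hat = t_best / max_tau
    return rhos[t_best] * (1.0 - tau_hat), t_best
```

Under this reading, a high score $\Psi_{i,j}$ means the two grids show strongly correlated activity at a short delay, suggesting the two camera views are topologically adjacent.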
CN201310270349.2A 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes Expired - Fee Related CN103325121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310270349.2A CN103325121B (en) 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310270349.2A CN103325121B (en) 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes

Publications (2)

Publication Number Publication Date
CN103325121A true CN103325121A (en) 2013-09-25
CN103325121B CN103325121B (en) 2017-05-17

Family

ID=49193844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310270349.2A Expired - Fee Related CN103325121B (en) 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes

Country Status (1)

Country Link
CN (1) CN103325121B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886089A (en) * 2014-03-31 2014-06-25 吴怀正 Travelling record video concentrating method based on learning
US20150302655A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Using a map of the world for augmented or virtual reality systems
CN107292266A (en) * 2017-06-21 2017-10-24 吉林大学 A kind of vehicle-mounted pedestrian area estimation method clustered based on light stream
CN110798654A (en) * 2018-08-01 2020-02-14 华为技术有限公司 Software-defined camera method, system and camera
CN113763435A (en) * 2020-06-02 2021-12-07 精标科技集团股份有限公司 Tracking and Shooting Method Based on Multiple Cameras

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616309A (en) * 2009-07-16 2009-12-30 上海交通大学 Non-overlapping visual field multiple-camera human body target tracking method
KR20110034298A (en) * 2009-09-28 2011-04-05 삼성테크윈 주식회사 Surveillance Systems in Storage Area Networks
CN102436662A (en) * 2011-11-29 2012-05-02 南京信息工程大学 Human body target tracking method in nonoverlapping vision field multi-camera network
CN102724482A (en) * 2012-06-18 2012-10-10 西安电子科技大学 Intelligent visual sensor network moving target relay tracking system based on GPS (global positioning system) and GIS (geographic information system)


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
常发亮, 李江宝: "Multi-camera relay tracking strategy based on topology model and feature learning", Journal of Jilin University (Engineering and Technology Edition) *
张磊, 项学智, 赵春晖: "Moving object detection based on optical flow field and level set", Journal of Computer Applications *
申明军, 欧阳宁, 莫建文, 张彤: "Object tracking in a multi-camera environment", Modern Electronics Technique *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886089B (en) * 2014-03-31 2017-12-15 吴怀正 Driving recording video concentration method based on study
CN103886089A (en) * 2014-03-31 2014-06-25 吴怀正 Travelling record video concentrating method based on learning
US10008038B2 (en) 2014-04-18 2018-06-26 Magic Leap, Inc. Utilizing totems for augmented or virtual reality systems
US10109108B2 (en) 2014-04-18 2018-10-23 Magic Leap, Inc. Finding new points by render rather than search in augmented or virtual reality systems
US9767616B2 (en) 2014-04-18 2017-09-19 Magic Leap, Inc. Recognizing objects in a passable world model in an augmented or virtual reality system
US11205304B2 (en) 2014-04-18 2021-12-21 Magic Leap, Inc. Systems and methods for rendering user interfaces for augmented or virtual reality
US9761055B2 (en) 2014-04-18 2017-09-12 Magic Leap, Inc. Using object recognizers in an augmented or virtual reality system
US9852548B2 (en) 2014-04-18 2017-12-26 Magic Leap, Inc. Systems and methods for generating sound wavefronts in augmented or virtual reality systems
US9881420B2 (en) 2014-04-18 2018-01-30 Magic Leap, Inc. Inferential avatar rendering techniques in augmented or virtual reality systems
US9911234B2 (en) 2014-04-18 2018-03-06 Magic Leap, Inc. User interface rendering in augmented or virtual reality systems
US9911233B2 (en) 2014-04-18 2018-03-06 Magic Leap, Inc. Systems and methods for using image based light solutions for augmented or virtual reality
US9922462B2 (en) 2014-04-18 2018-03-20 Magic Leap, Inc. Interacting with totems in augmented or virtual reality systems
US9928654B2 (en) 2014-04-18 2018-03-27 Magic Leap, Inc. Utilizing pseudo-random patterns for eye tracking in augmented or virtual reality systems
US9972132B2 (en) 2014-04-18 2018-05-15 Magic Leap, Inc. Utilizing image based light solutions for augmented or virtual reality
US9984506B2 (en) 2014-04-18 2018-05-29 Magic Leap, Inc. Stress reduction in geometric maps of passable world model in augmented or virtual reality systems
US9996977B2 (en) 2014-04-18 2018-06-12 Magic Leap, Inc. Compensating for ambient light in augmented or virtual reality systems
US20150302655A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Using a map of the world for augmented or virtual reality systems
US10013806B2 (en) 2014-04-18 2018-07-03 Magic Leap, Inc. Ambient light compensation for augmented or virtual reality
US9766703B2 (en) 2014-04-18 2017-09-19 Magic Leap, Inc. Triangulation of points using known points in augmented or virtual reality systems
US10115233B2 (en) 2014-04-18 2018-10-30 Magic Leap, Inc. Methods and systems for mapping virtual objects in an augmented or virtual reality system
US10043312B2 (en) 2014-04-18 2018-08-07 Magic Leap, Inc. Rendering techniques to find new map points in augmented or virtual reality systems
US10115232B2 (en) * 2014-04-18 2018-10-30 Magic Leap, Inc. Using a map of the world for augmented or virtual reality systems
US10127723B2 (en) 2014-04-18 2018-11-13 Magic Leap, Inc. Room based sensors in an augmented reality system
US10186085B2 (en) 2014-04-18 2019-01-22 Magic Leap, Inc. Generating a sound wavefront in augmented or virtual reality systems
US10198864B2 (en) 2014-04-18 2019-02-05 Magic Leap, Inc. Running object recognizers in a passable world model for augmented or virtual reality
US10262462B2 (en) 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
US10909760B2 (en) 2014-04-18 2021-02-02 Magic Leap, Inc. Creating a topological map for localization in augmented or virtual reality systems
US10665018B2 (en) 2014-04-18 2020-05-26 Magic Leap, Inc. Reducing stresses in the passable world model in augmented or virtual reality systems
US10825248B2 (en) 2014-04-18 2020-11-03 Magic Leap, Inc. Eye tracking systems and method for augmented or virtual reality
US10846930B2 (en) 2014-04-18 2020-11-24 Magic Leap, Inc. Using passable world model for augmented or virtual reality
CN107292266A (en) * 2017-06-21 2017-10-24 吉林大学 A kind of vehicle-mounted pedestrian area estimation method clustered based on light stream
CN110798654A (en) * 2018-08-01 2020-02-14 华为技术有限公司 Software-defined camera method, system and camera
CN110798654B (en) * 2018-08-01 2021-12-10 华为技术有限公司 Method and system for defining camera by software and camera
US11979686B2 (en) 2018-08-01 2024-05-07 Huawei Technologies Co., Ltd. Method and system for software-defined camera and camera
CN113763435A (en) * 2020-06-02 2021-12-07 精标科技集团股份有限公司 Tracking and Shooting Method Based on Multiple Cameras

Also Published As

Publication number Publication date
CN103325121B (en) 2017-05-17

Similar Documents

Publication Publication Date Title
Wang et al. Semantic line framework-based indoor building modeling using backpacked laser scanning point cloud
CN103325121B (en) Method and system for estimating network topological relations of cameras in monitoring scenes
Peng et al. Drone-based vacant parking space detection
CN103605983B (en) Remnant detection and tracking method
Cao et al. Abnormal crowd motion analysis
CN102436662A (en) Human body target tracking method in nonoverlapping vision field multi-camera network
CN103986910A (en) A method and system for counting passenger flow based on intelligent analysis camera
CN103824070A (en) A Fast Pedestrian Detection Method Based on Computer Vision
CN103793920B (en) Retrograde detection method and its system based on video
CN119495054A (en) Intelligent security monitoring method and system based on image recognition
CN112232333A (en) Real-time passenger flow thermodynamic diagram generation method in subway station
JP2025506086A (en) Remote Intelligence People Flow Statistics Control System Based on Internet of Things (IoT)
CN109948474A (en) AI thermal imaging all-weather intelligent monitoring method
CN103530601B (en) A Bayesian Network-Based Deduction Method for Crowd State in Surveillance Blind Area
Shyaa et al. Enhancing real human detection and people counting using YOLOv8
Fehr et al. Counting people in groups
CN104217442A (en) Aerial video moving object detection method based on multiple model estimation
Ermis et al. Abnormal behavior detection and behavior matching for networked cameras
Daramola et al. Automatic vehicle identification system using license plate
Meingast et al. Automatic camera network localization using object image tracks
Tahira et al. Deep Learning based Approach for Crowd Density Estimation and Flow Prediction
Prokaj et al. Using 3d scene structure to improve tracking
Parvathy et al. Anomaly detection using motion patterns computed from optical flow
Ikoma et al. Multi-target tracking in video by SMC-PHD filter with elimination of other targets and state dependent multi-modal likelihoods
Shen et al. Metro pedestrian detection based on mask R-CNN and spatial-temporal feature

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Guangdong province Shenzhen city Futian District District Shennan Road Press Plaza room 1306

Applicant after: Bianco robot Co Ltd

Applicant after: Shanghai Zhongke Institute for Advanced Study

Applicant after: Smart City Information Technology Co., Ltd.

Address before: 518000 Guangdong province Shenzhen city Futian District District Shennan Road Press Plaza room 1306

Applicant before: Anke Smart Cities Technolongy (PRC) Co., Ltd.

Applicant before: Shanghai Zhongke Institute for Advanced Study

Applicant before: Smart City Information Technology Co., Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170517

Termination date: 20180628