
CN107330372B - An Analysis Method for Video-Based Crowd Density and Abnormal Behavior Detection System - Google Patents


Info

Publication number
CN107330372B
Authority
CN
China
Prior art keywords
optical flow
crowd
image
density
video
Prior art date
Legal status
Active
Application number
CN201710411185.9A
Other languages
Chinese (zh)
Other versions
CN107330372A (en)
Inventor
何小海
韦招静
吴晓红
卿粼波
熊杰
滕奇志
王正勇
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201710411185.9A
Publication of CN107330372A
Application granted
Publication of CN107330372B
Status: Active


Classifications

    • G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/53 — Recognition of crowd images, e.g. recognition of crowd congestion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an analysis method for a video-based crowd density and abnormal-behavior detection system, relating to intelligent video surveillance, object detection, and related fields. The method comprises: crowd density classification using an improved combination of pixel-count statistics and texture statistics; people-count estimation using a combination of pixel-count statistics and foreground corner detection; and, building on local optical flow, a crowd abnormal-behavior detection algorithm based on the magnification of the average kinetic-energy change. The method applies not only to ordinary surveillance video but especially to video of crowds gathered in large public places. The method is fast, effective, and practical, meeting real-world requirements.

Figure 201710411185

Description

An Analysis Method for a Video-Based Crowd Density and Abnormal Behavior Detection System

Technical Field

The invention relates to an analysis method for a video-based crowd density and abnormal-behavior detection system in the field of public safety, and in particular to an improved crowd density classification method combining pixel-count statistics with texture statistics, a people-count estimation method combining pixel-count statistics with corner detection, and a newly proposed crowd abnormal-behavior detection algorithm based on the magnification of the average kinetic-energy change. The invention belongs to the field of machine vision and intelligent information processing.

Background Art

In recent years, with rapid economic development and ever-increasing social activity, crowd congestion in public places such as transportation hubs, large event venues, and large shopping malls has become more and more frequent, and the safety hazards caused by overcrowding have grown correspondingly severe. To maintain public order and ensure crowd safety, more and more surveillance systems have therefore been deployed. The EU's ADVISOR system, for example, is a safety-management system for public transportation; it covers individual behavior analysis as well as group behavior analysis, and touches on human-computer interaction. With the explosive growth in demand for video surveillance systems, developing a stable, fast, multi-functional intelligent surveillance system has become urgent.

Technically, the analysis method of a video-based crowd density and abnormal-behavior detection system comprises three parts: crowd density classification, people-count estimation, and abnormal-behavior detection.

First, the crowd density in the video is graded. Video-based crowd density classification methods fall roughly into three categories: methods based on pixel-count statistics, methods based on texture features, and methods based on individual features. Pixel-statistics methods are simple to implement and computationally cheap, but mutual occlusion within the crowd causes large errors in the final density grading, so they suit only scenes of low crowd density. Texture-feature methods handle overlap and occlusion between people better and achieve better density grading, but because they extract many features they are computationally heavier and grade low- and medium-density scenes with larger error, so they suit only high-density scenes. Methods based on individual features have high computational complexity and are difficult to run in real time in practical settings.

Second, the number of people in the video is estimated. People-counting algorithms fall into two classes: algorithms that count directly by detecting or recognizing individual targets, and algorithms that count indirectly by extracting crowd features. Direct methods subdivide into model-based and trajectory-clustering approaches: model-based methods segment people and match them against human models or shapes for counting, while trajectory-clustering methods detect mutually independent motion patterns in the crowd and count each pattern separately. Indirect methods mainly comprise pixel-feature algorithms and feature-point algorithms. Davies et al. first proposed using image processing to count crowds in 1995: extract the foreground edge image of a video frame, obtain its total pixel count, and feed that count into a trained linear regression to estimate the number of people; the algorithm has low complexity but limited accuracy. At high crowd density, congestion causes targets to overlap or occlude one another, sharply degrading pixel-feature methods, so such algorithms are usable only at low crowd density. Feature-point algorithms first extract feature points such as Harris, SIFT, or SURF points, then track them, filter out the points that move with the targets, train the regression parameters needed for the scene, and finally obtain the total crowd count from the number of foreground feature points and the trained regression equation.

Finally, abnormal-behavior detection is performed on the crowd in the video. Crowd abnormal-behavior detection belongs to group behavior recognition; at present, abnormality is mainly judged from the speed and direction of crowd motion, and many scholars have proposed algorithms for it. Andrade et al. first defined a "normal crowd motion" and then used a Hidden Markov Model (HMM) for crowd behavior analysis, but the behavior descriptors the algorithm uses perform poorly, so its detection accuracy is low. Wang et al. describe group behavior with improved spatio-temporal volume features, but the computational complexity is too high to meet real-time requirements. Hassner et al. proposed a crowd-violence detection algorithm based on global optical-flow information, with high detection accuracy. Liu et al. proposed discovering group motion trajectories with a family of agent-motion models to recognize group motion; recognition quality is good, but processing is too slow for practical application.

Summary of the Invention

The object of the present invention is to solve the above problems by providing an analysis method for a video-based crowd density and abnormal-behavior detection system.

The present invention achieves this object through the following technical solution:

(1) Crowd density classification:

Pixel-count statistics and texture features are combined: the crowd is first pre-estimated with the pixel-statistics method, and when the current crowd density is judged too high, texture features are used for density estimation instead. This exploits the simplicity and speed of pixel statistics while retaining texture-based grading for high-density crowds.

Crowd density classification based on pixel statistics:

(a) Convert the image to be processed to grayscale and apply the Canny edge detector to two adjacent grayscale frames to obtain their edge images;

(b) difference the two edge images and remove interfering noise with morphological dilation and erosion operations, yielding an optimized differential edge image;

(c) AND the current frame's edge image with the optimized differential edge image to obtain the moving-foreground edge image;

(d) divide the foreground edge image into many rectangular blocks, count the edge pixels in each block, and sum them to obtain the final total edge-pixel count;

(e) fit a linear relation between crowd size and the total foreground-edge pixel count, estimate the crowd size, and from it the approximate crowd density;

(f) if the current count does not exceed the threshold Thr1, output the density grade (low or medium density); otherwise apply texture-feature-based grading and output high or extremely high density.
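The block-counting, linear-fit, and threshold-dispatch logic of steps (d)–(f) can be sketched as follows. This is an illustrative sketch, not the patented implementation; the function names, block size, fit coefficients `a`, `b`, and the threshold values are placeholders that in practice would come from training on the target scene.

```python
def count_edge_pixels(edge_image, block_h, block_w):
    """Sum edge pixels (nonzero entries) over all block_h x block_w tiles,
    as in step (d). edge_image is a 2-D list of 0/1 values."""
    total = 0
    for r0 in range(0, len(edge_image), block_h):
        for c0 in range(0, len(edge_image[0]), block_w):
            for row in edge_image[r0:r0 + block_h]:
                total += sum(1 for v in row[c0:c0 + block_w] if v)
    return total

def classify_density(edge_image, a=0.05, b=0.0, thr1=40):
    """Linear people estimate n = a*pixels + b (step (e)); grade low/medium
    directly, defer to texture-based grading above Thr1 (step (f))."""
    pixels = count_edge_pixels(edge_image, 16, 16)
    n = a * pixels + b
    if n <= thr1:
        return ("low" if n <= thr1 / 2 else "medium"), n
    return "use-texture-features", n  # hand off to GLCM-based grading
```

The dispatch mirrors the hybrid design: the cheap pixel count runs on every frame, and the heavier texture analysis is invoked only when the pre-estimate crosses Thr1.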

Crowd density classification based on texture features: because image textures are irregular and mostly follow statistical distributions, texture features based on the grey-level co-occurrence matrix (GLCM) are selected to analyze high-density crowds and grade different crowd densities. Let I(x,y) be the original image of size M×N, where M and N are the image width and height and L is the number of grey levels. Under a given spatial relation, the grey-level co-occurrence matrix is:

f(i,j) = #{(x1,y1),(x2,y2) ∈ M×N | I(x1,y1) = i, I(x2,y2) = j}  (1)

In Eq. (1), i and j are image grey values, # denotes the number of elements in the set, and f is an L×L matrix. With the distance between pixels (x1,y1) and (x2,y2) equal to d and the angle between their connecting line and the horizontal equal to θ, the generated co-occurrence matrix is f(i,j|d,θ): the number of pixel pairs in the image with grey values i and j, distance d, and direction θ, where typically θ = 0°, 45°, 90°, 135°. Five texture parameters are derived from the GLCM: contrast, uniformity, energy, entropy, and correlation.
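A minimal pure-Python construction of the co-occurrence count f(i,j|d,θ) of Eq. (1), here for the common case d = 1, θ = 0° (horizontal neighbours). Image values are assumed already quantised to L grey levels; the function and argument names are illustrative, not from the patent.

```python
def glcm(image, levels, dx=1, dy=0):
    """Count pixel pairs (p, q) where q is offset by (dy, dx) from p,
    p has grey level i and q has grey level j; returns an L x L matrix."""
    f = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                f[image[y][x]][image[y2][x2]] += 1
    return f
```

For example, `glcm([[0, 1], [1, 0]], 2)` counts one horizontal (0,1) pair and one (1,0) pair; varying `(dx, dy)` gives the other three conventional directions 45°, 90°, 135°.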

Contrast:

CON = Σi Σj (i−j)² f(i,j)  (2)

Uniformity:

HOM = Σi Σj f(i,j) / (1 + (i−j)²)  (3)

Energy:

ASM = Σi Σj f(i,j)²  (4)

Entropy:

ENT = −Σi Σj f(i,j) log f(i,j)  (5)

Correlation:

COR = Σi Σj (i−μx)(j−μy) f(i,j) / (σx σy)  (6)

where

μx = Σi i Σj f(i,j)  (7)

μy = Σj j Σi f(i,j)  (8)

σx² = Σi (i−μx)² Σj f(i,j)  (9)

σy² = Σj (j−μy)² Σi f(i,j)  (10)
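The five texture parameters named above (contrast, uniformity, energy, entropy, correlation) can be computed from a GLCM as below. The patent gives its exact formulas only as figures, so this sketch assumes the standard textbook GLCM definitions, with the matrix first normalised to a probability distribution; function and key names are illustrative.

```python
import math

def glcm_features(f):
    """Standard GLCM texture features from an L x L co-occurrence matrix f
    (assumed forms; the patent's figure-only formulas may differ)."""
    total = sum(sum(row) for row in f) or 1
    p = [[v / total for v in row] for row in f]  # normalise to probabilities
    L = len(p)
    pairs = [(i, j) for i in range(L) for j in range(L)]
    con = sum((i - j) ** 2 * p[i][j] for i, j in pairs)            # contrast
    hom = sum(p[i][j] / (1 + (i - j) ** 2) for i, j in pairs)      # uniformity
    asm = sum(p[i][j] ** 2 for i, j in pairs)                      # energy
    ent = -sum(p[i][j] * math.log(p[i][j]) for i, j in pairs if p[i][j] > 0)
    mu_x = sum(i * p[i][j] for i, j in pairs)
    mu_y = sum(j * p[i][j] for i, j in pairs)
    sd_x = math.sqrt(sum((i - mu_x) ** 2 * p[i][j] for i, j in pairs))
    sd_y = math.sqrt(sum((j - mu_y) ** 2 * p[i][j] for i, j in pairs))
    cor = (sum((i - mu_x) * (j - mu_y) * p[i][j] for i, j in pairs)
           / (sd_x * sd_y)) if sd_x and sd_y else 0.0
    return {"contrast": con, "uniformity": hom, "energy": asm,
            "entropy": ent, "correlation": cor}
```

On the purely off-diagonal matrix `[[0, 1], [1, 0]]` this yields maximal contrast (1.0) and correlation −1.0, illustrating how coarse, high-density textures shift these statistics.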

(2) People-count estimation:

Pixel-count statistics and foreground corner detection are combined: the number of people in the video is first pre-estimated with the pixel-statistics method; when the current count is judged to exceed the threshold Thr1, foreground corner detection is used to estimate the count. The specific steps are as follows:

People-count estimation based on pixel statistics is as described in (1) above.

People-count estimation based on foreground corner detection:

(a) Initialize the parameters, chiefly the values of α and β and the minimum distance dmin between corners.

(b) Corner detection: obtain all corners of the video frame with the Shi-Tomasi algorithm.

(c) Optical-flow tracking: track the detected corners with the pyramidal Lucas-Kanade (LK) optical-flow method.

(d) Extract foreground corners: compute the optical-flow magnitude at each corner to discard background points; a corner whose flow magnitude exceeds the threshold Thr2 is a foreground point.

(e) Count the crowd: from the number of foreground corners, compute the crowd estimate Nest from the initial values and Eq. (11).

Nest = (αn + βn)/2  (11)

Here n is the number of foreground corners detected in the video frame and N the true crowd size. To reduce the influence of the camera angle, the lower bound on the crowd size is defined as αn and the upper bound as βn, with α and β suitable positive numbers and α ≤ β. Statistics over a large amount of data show that the crowd size N generally does not exceed the number of foreground corners n, hence β ≤ 1. The value of α depends on the share of the image area occupied by each individual, so in general 0.5 < α ≤ 1.
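Step (e) and Eq. (11) reduce to taking the midpoint of the trained interval [αn, βn]. A one-function sketch, with placeholder values for α and β (in practice these come from scene statistics, subject to 0.5 < α ≤ β ≤ 1):

```python
def estimate_count(n_corners, alpha=0.7, beta=0.95):
    """Crowd estimate Nest = (alpha*n + beta*n)/2 from Eq. (11);
    alpha, beta are trained bounds with 0.5 < alpha <= beta <= 1."""
    assert 0 < alpha <= beta <= 1, "trained bounds must satisfy alpha <= beta <= 1"
    return (alpha * n_corners + beta * n_corners) / 2
```

So 100 foreground corners with α = 0.7, β = 0.9 yield an estimate of 80 people, the centre of the interval [70, 90].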

(3) Abnormal behavior detection

Generally, when a crowd behaves abnormally, the kinetic energy of human motion changes far more sharply than usual, so the crowd's behavioral state can be judged by analyzing changes in motion kinetic energy. Human motion kinetic energy is generally positively correlated with the optical-flow magnitude, so the flow magnitude can serve as its proxy: from the optical flow at a point, the flow magnitude is obtained, and combined with inter-frame motion information, the change in flow magnitude is obtained. The average optical-flow magnitude of the crowd can thus be computed, and from the inter-frame information the change in the average flow magnitude follows.

Assume that pixel px,y in the video frame has optical-flow vector (ux,y,t, vx,y,t) at time t, where ux,y,t and vx,y,t are the flow components of px,y in the x and y directions at time t. The optical-flow magnitude of the point at time t is:

mx,y,t = √(ux,y,t² + vx,y,t²)  (12)

The change in optical-flow magnitude between adjacent frames at px,y is computed and then binarized against the threshold mean; denoting the binarized change bx,y,t, the rule is:

bx,y,t = 1 if |mx,y,t − mx,y,t−1| ≥ mean, and bx,y,t = 0 otherwise  (13)

The threshold mean is the average inter-frame change of flow magnitude over the frame, obtained from Eq. (14):

mean = (1 / (row·cols)) Σx Σy |mx,y,t − mx,y,t−1|  (14)

where row and cols denote the number of rows and columns of the video frame, respectively.

Next, the average of the binarized flow-magnitude changes of the same pixel over time is computed. Denoting this average b̄x,y and the number of frames considered T, it is obtained from Eq. (15):

b̄x,y = (1/T) Σt bx,y,t  (15)

From the discussion above, the optical-flow magnitude of every pixel is computed from the flow information, and from it the per-frame average flow magnitude. Let the average flow magnitude and average kinetic energy of the current frame be m̄t and Ēt, and those of the previous frame m̄t−1 and Ēt−1. Eq. (16) relates the change in kinetic energy to the change in flow magnitude:

λ = Ēt / Ēt−1 = (k·m̄t²) / (k·m̄t−1²) = (m̄t / m̄t−1)²  (16)

where k is a positive constant and λ is the magnification of the average kinetic-energy change; when λ exceeds the threshold Thr2, the current crowd motion exhibits abnormal behavior.

In summary, the average kinetic-energy change magnification of each group of people in the image is computed and compared against the threshold Thr2 to judge whether the current crowd behaves abnormally. If abnormal behavior is found, the average optical-flow magnitude of the frame preceding the first abnormal frame is saved, so that the kinetic-energy change magnification of subsequent frames relative to that frame can be computed to judge whether they are also abnormal.
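The detection pipeline above can be sketched end to end: per-pixel flow magnitudes, the per-frame mean magnitude, and the kinetic-energy change ratio λ between consecutive frames. Since Eq. (16) is given only as a figure, this sketch assumes, as the surrounding text suggests, that kinetic energy is proportional to the squared flow magnitude (E = k·m², k > 0), so λ reduces to a ratio of squared mean magnitudes; the threshold value and function names are placeholders.

```python
import math

def flow_magnitudes(u, v):
    """m[y][x] = sqrt(u^2 + v^2) per pixel, as in the magnitude formula;
    u, v are 2-D lists of flow components."""
    return [[math.hypot(uu, vv) for uu, vv in zip(ru, rv)]
            for ru, rv in zip(u, v)]

def mean_magnitude(m):
    """Average flow magnitude over the whole frame."""
    rows, cols = len(m), len(m[0])
    return sum(sum(row) for row in m) / (rows * cols)

def is_abnormal(mean_prev, mean_cur, thr=4.0):
    """lambda = E_cur / E_prev = (mean_cur / mean_prev)^2 under the
    assumed E = k*m^2 relation; flag abnormal when lambda > thr."""
    lam = (mean_cur / mean_prev) ** 2 if mean_prev > 0 else float("inf")
    return lam > thr, lam
```

A frame whose mean flow magnitude jumps from 1.0 to 3.0 gives λ = 9, which exceeds the placeholder threshold of 4 and would be flagged; the reference `mean_prev` is then frozen at the pre-abnormal frame, matching the save-the-previous-frame rule above.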

Beneficial effects of the invention:

The present invention provides a video-based crowd density and abnormal-behavior detection system. For video of crowds gathered in prominent large public places, the system can detect the crowd density grade, estimate the number of people, and determine whether the crowd state is normal, with accurate detection and alarming on abnormal crowds. The method is easy to implement on a software platform, computationally light, and highly real-time.

Brief Description of the Drawings

Fig. 1 is the overall flow chart of the video-based crowd density and abnormal-behavior detection system of the present invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings:

如图1所示,图中的流程是以下方法的主要过程简述,采用以下步骤对用于监测的视频图像进行分析:As shown in Figure 1, the flow in the figure is a brief description of the main process of the following method, and the following steps are used to analyze the video images used for monitoring:

(1)人群密度分级:(1) Crowd density classification:

将像素点统计和纹理特征相结合,首先用像素点统计法对人群进行预估计,当判断当前的人群密度过高时就采用纹理特征进行人群密度估计,该方法既利用了像素点统计法实现简单、速度快的优点,又能在高密度人群时利用纹理特征法进行进一步的密度分级。Combining pixel point statistics and texture features, first, the pixel point statistics method is used to pre-estimate the crowd. When it is judged that the current crowd density is too high, the texture feature is used to estimate the crowd density. This method not only uses the pixel point statistics method to achieve It has the advantages of simplicity and high speed, and can use the texture feature method for further density classification when there are high-density crowds.

基于像素点统计的人群密度分级:Crowd density classification based on pixel statistics:

(a)先对待处理图像转化为灰度图像,采用文献Canny边缘检测算法对相邻两帧灰度图像进行边缘提取,得到边缘图像;(a) First convert the image to be processed into a grayscale image, and use the literature Canny edge detection algorithm to extract the edges of two adjacent grayscale images to obtain edge images;

(b)将两帧得到的边缘图像进行差分,并通过形态学膨胀和腐蚀操作消除干扰噪声,得到优化后的差分边缘图像;(b) Differentiate the edge images obtained from the two frames, and remove the interference noise through morphological expansion and corrosion operations to obtain an optimized differential edge image;

(c)再将当前帧的边缘图像和优化后的差分边缘图像进行与操作,得到运动前景边缘图像;(c) performing AND operation on the edge image of the current frame and the optimized differential edge image to obtain a moving foreground edge image;

(d)将前景边缘像划分很多个矩形框,计算出每个矩形框中的边缘像素点数,计算出边缘最终的总像素点数;(d) dividing the foreground edge image into a plurality of rectangular frames, calculating the number of edge pixels in each rectangular frame, and calculating the final total pixel number of the edge;

(e)采用线性拟合的方法拟合出人群数量与人群前景边缘总像素点数之间的关系,估计出人群数量,进而估计出大概的人群密度。(e) Using the linear fitting method to fit the relationship between the number of crowds and the total number of pixels on the foreground edge of the crowd, to estimate the number of crowds, and then to estimate the approximate crowd density.

(f)当前人数是否大于阈值Thr1,若否,则输出密度等级,低密度和中密度;若是,则采用基于纹理特征的人群密度等级分级,并输出高密度和极高密度。(f) Whether the current number of people is greater than the threshold Thr1, if not, output the density level, low density and medium density; if so, adopt the crowd density level classification based on texture features, and output high density and extremely high density.

基于纹理特征的人群密度分级,由于图像中的纹理没有规律,大多服从统计分布规律,因此本文选取基于灰度共生矩阵(GLCM)的纹理特征对高密度人群进行分析,并实现不同人群密度的分级。设I(x,y)为原图像,M×N为图像像尺寸为,M,N分别表示图像的宽度和高度,L表示图像灰度级别,根据一定空间关系,得出的灰度共生矩阵为:Crowd density classification based on texture features, because the textures in the image are irregular, most of them obey the statistical distribution law, so this paper selects the texture features based on the gray level co-occurrence matrix (GLCM) to analyze the high-density crowd, and realize the classification of different crowd densities. . Let I(x,y) be the original image, M×N is the size of the image, M, N are the width and height of the image respectively, L is the gray level of the image, according to a certain spatial relationship, the obtained gray level co-occurrence matrix for:

f(i,j)=#{(x1,y1),(x2,y2)∈M×N|I(x1,y1)=i,I(x2,y2)=j} (1)f(i,j)=#{(x 1 ,y 1 ),(x 2 ,y 2 )∈M×N|I(x 1 ,y 1 )=i,I(x 2 ,y 2 )=j } (1)

式(1)中,i和j表示图像灰度值,#表示集合中的元素个数,f为L×L的矩阵,像素点(x1,y1)和(x2,y2)之间的距离为d,两像素点的连线与水平方向的夹角为θ,则生成的灰度共生矩阵为f(i,j|d,θ),其意义是满足图像中出现灰度值分别为i、j,相距距离为d,方向为θ的两个像素点的次数,一般θ=0°,45°,90°,135°。由灰度共生矩阵得到5种纹理参数:对比度、均匀性、能量、熵、相关性。In formula (1), i and j represent the gray value of the image, # represents the number of elements in the set, f is an L×L matrix, and the pixel points (x 1 , y 1 ) and (x 2 , y 2 ) are between The distance between them is d, and the angle between the connection line between the two pixels and the horizontal direction is θ, then the generated grayscale co-occurrence matrix is f(i,j|d,θ), which means that the gray value appears in the image. They are i and j respectively, the distance is d, and the direction is the number of two pixel points of θ, generally θ=0°, 45°, 90°, 135°. Five texture parameters are obtained from the gray level co-occurrence matrix: contrast, uniformity, energy, entropy, and correlation.

对比度

Figure BDA0001312431400000061
Contrast
Figure BDA0001312431400000061

均匀性

Figure BDA0001312431400000062
uniformity
Figure BDA0001312431400000062

能量

Figure BDA0001312431400000063
energy
Figure BDA0001312431400000063

Figure BDA0001312431400000064
entropy
Figure BDA0001312431400000064

相关性

Figure BDA0001312431400000065
Correlation
Figure BDA0001312431400000065

其中,in,

Figure BDA0001312431400000066
Figure BDA0001312431400000066

Figure BDA0001312431400000067
Figure BDA0001312431400000067

Figure BDA0001312431400000068
Figure BDA0001312431400000068

Figure BDA0001312431400000069
Figure BDA0001312431400000069

(2)人数估计:(2) Estimated number of people:

将像素点统计和前景角点检测相结合,首先用像素点统计法对视频中的人数预估计,当判断当前的人数高于阈值Thr1时就采用前景角点检测对视频中的人数进行估计,具体步骤如下:Combining pixel statistics and foreground corner detection, first use pixel statistics to pre-estimate the number of people in the video. When it is judged that the current number of people is higher than the threshold Thr1, the foreground corner detection is used to estimate the number of people in the video. Specific steps are as follows:

基于像素点统计的人数估计如上(1)中所述。The estimation of the number of people based on pixel counts is as described in (1) above.

基于前景点角点检测的人数估计:Population estimation based on foreground point corner detection:

(a)初始化参数,主要是初始化α、β的值,角点之间的最小距离dmin等。(a) Initialization parameters, mainly initializing the values of α and β, the minimum distance d min between the corner points, etc.

(b)角点检测,采用Shi-Tomasi算法获得视频图像的所有角点。(b) Corner detection, using the Shi-Tomasi algorithm to obtain all corners of the video image.

(c)光流跟踪,采用金字塔LK光流法对获取的角点进行有效跟踪。(c) Optical flow tracking, using the pyramid LK optical flow method to effectively track the acquired corners.

(d)提取前景角点,通过计算每个角点处光流幅值剔除背景点,该处光流幅值大于阈值Thr2时,即为前景点。(d) Extract the foreground corner points, and remove the background points by calculating the optical flow amplitude at each corner point. When the optical flow amplitude at this point is greater than the threshold Thr2, it is the foreground point.

(e)检测人群人数,根据前景角点数目,按照初始值和式(11)计算得到人群人数估计值Nest(e) Detect the number of people in the crowd, and according to the number of foreground corners, calculate the estimated number of crowds N est according to the initial value and formula (11).

Nest=(αn+βn)/2 (11)N est =(αn+βn)/2 (11)

其中,视频图像中检测到的前景角点的数目为n,人群目标人数为N,为了减少图像中角度造成的影响,定义人群人数的下限值为αn,上限值为βn,α、β为适当正数,且α≤β根据大量数据统计发现,人群人数N一般不大于前景角点数目n,故有β≤1。α的大小与人群个体所占图像面积比重大小有关,所以一般地0.5<α≤1。Among them, the number of foreground corner points detected in the video image is n, and the number of target crowds is N. In order to reduce the influence caused by the angle in the image, the lower limit of the number of crowds is defined as αn, the upper limit is βn, α, β It is an appropriate positive number, and α≤βAccording to a large number of data statistics, it is found that the number of people N is generally not greater than the number of foreground corners n, so β≤1. The size of α is related to the proportion of the image area occupied by individuals in the crowd, so generally 0.5<α≤1.

(3) Abnormal behavior detection

Assume that the optical-flow vector of a pixel p_{x,y} in the video frame at time t is (u_{x,y,t}, v_{x,y,t}), where u_{x,y,t} and v_{x,y,t} are the optical-flow components of p_{x,y} in the x and y directions at time t. The optical-flow magnitude at this pixel at time t is:

m_{x,y,t} = \sqrt{u_{x,y,t}^2 + v_{x,y,t}^2}    (12)

Compute the change of the optical-flow magnitude at p_{x,y} between adjacent frames, then binarize it against the set threshold mean. Let b_{x,y,t} denote the binarized change; the binarization rule is:

b_{x,y,t} = \begin{cases} 1, & |m_{x,y,t} - m_{x,y,t-1}| > mean \\ 0, & \text{otherwise} \end{cases}    (13)

The threshold mean is the average of the inter-frame optical-flow magnitude change over the frame, obtained from Eq. (14):

mean = \frac{1}{rows \times cols} \sum_{x=1}^{rows} \sum_{y=1}^{cols} |m_{x,y,t} - m_{x,y,t-1}|    (14)

where rows and cols are the numbers of rows and columns of the video frame, respectively.
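Eqs. (12)-(14) can be sketched in plain Python, with small nested lists standing in for the per-pixel flow-component fields; the function names are illustrative, not from the patent:

```python
import math

def flow_magnitude(u, v):
    """Per-pixel optical-flow magnitude m = sqrt(u^2 + v^2), Eq. (12)."""
    return [[math.hypot(u[i][j], v[i][j]) for j in range(len(u[0]))]
            for i in range(len(u))]

def binarize_change(m_t, m_prev):
    """Binarize the inter-frame magnitude change against its frame-wide
    average (the threshold `mean`), Eqs. (13)-(14)."""
    rows, cols = len(m_t), len(m_t[0])
    diff = [[abs(m_t[i][j] - m_prev[i][j]) for j in range(cols)]
            for i in range(rows)]
    mean = sum(sum(r) for r in diff) / (rows * cols)  # Eq. (14)
    binary = [[1 if diff[i][j] > mean else 0 for j in range(cols)]
              for i in range(rows)]                   # Eq. (13)
    return binary, mean
```

For a 2×2 frame whose magnitudes jump from all zeros to [[1, 2], [3, 4]], the threshold mean is 2.5 and only the two larger changes are marked as 1.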

Next, compute the temporal average of the binarized flow-magnitude change of each pixel over successive frames. Denote this average by \bar{b}_{x,y}; it is obtained from Eq. (15).

\bar{b}_{x,y} = \frac{1}{T} \sum_{t=1}^{T} b_{x,y,t}    (15)

From the above discussion, the optical-flow magnitude of every pixel is computed from the optical-flow information, and from it the average flow magnitude of each frame. Let \bar{m}_t and \bar{E}_t denote the average optical-flow magnitude and the average kinetic energy of the current frame, and \bar{m}_{t-1} and \bar{E}_{t-1} those of the previous frame. Eq. (16) expresses the relation between the change in kinetic energy and the change in optical-flow magnitude:

\bar{E}_t = k \bar{m}_t^2, \qquad \lambda = \frac{\bar{E}_t}{\bar{E}_{t-1}} = \frac{\bar{m}_t^2}{\bar{m}_{t-1}^2}    (16)

where k is a positive constant and λ is the average kinetic-energy change ratio between frames; when λ exceeds the threshold Thr2, the current crowd motion contains abnormal behavior.
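A minimal sketch of the decision rule around Eq. (16), assuming the average kinetic energy is proportional to the squared average flow magnitude so that the constant k cancels in the ratio; the threshold value Thr2 = 2.0 is a hypothetical choice:

```python
def kinetic_energy_ratio(mean_mag_t, mean_mag_prev):
    """lambda = E_t / E_{t-1} = (m_t / m_{t-1})^2; the constant k cancels."""
    return (mean_mag_t / mean_mag_prev) ** 2

def is_abnormal(mean_mag_t, mean_mag_prev, thr2=2.0):
    """Flag abnormal crowd motion when the kinetic-energy ratio exceeds Thr2."""
    return kinetic_energy_ratio(mean_mag_t, mean_mag_prev) > thr2

# a sudden doubling of the average flow magnitude quadruples the kinetic energy
print(kinetic_energy_ratio(2.0, 1.0))  # -> 4.0
print(is_abnormal(2.0, 1.0))           # -> True
print(is_abnormal(1.1, 1.0))           # -> False
```

Because the ratio is taken against the previous frame, a gradual speed-up of the crowd never trips the threshold; only an abrupt jump in average motion energy does.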

In summary, the average kinetic-energy change ratio of each crowd in the image is computed and compared against the threshold Thr2 to judge whether the current crowd behaves abnormally. If abnormal behavior is found, the average optical-flow magnitude of the frame preceding the first abnormal frame is saved, so that the kinetic-energy change ratio of subsequent frames relative to that frame can be computed to judge whether they are also abnormal.

To verify the validity and feasibility of the proposed video-based crowd density and abnormal behavior detection system, a detailed experimental analysis and comparison follows:

Table 1 defines the relationship between crowd density and crowd count. The experiments were run on the public Grand Central video and on self-recorded videos. The density-class classification accuracy is given in Table 2, the mean relative error of the person-count estimate in Table 3, the detection rate, miss rate, and false-alarm rate of abnormal-behavior detection in Table 4, and the average per-frame processing time of the proposed video-analysis algorithm in Table 5. Tables 2-5 show that the proposed analysis method achieves good results, which verifies its validity and feasibility.

Table 1. Crowd density class definitions

Density class | Low | Medium | High | Very high
Number of people | 0-20 | 21-50 | 51-150 | >150

Table 2. Crowd density classification accuracy

Density class | Classification accuracy of the proposed algorithm
Low | 91.1%
Medium | 90.2%
High | 79.6%
Very high | 86.5%

Table 3. Mean relative error of the person-count estimate

Algorithm | Mean relative error
Proposed count-estimation algorithm | 16.82%

Table 4. Detection, miss, and false-alarm rates of abnormal-behavior detection

Algorithm | Detection rate | Miss rate | False-alarm rate
Proposed abnormal-behavior detection algorithm | 92.12% | 7.88% | 3.18%

Table 5. Average system time per frame

Algorithm | Average time per frame
Proposed video-analysis method | 101.84 ms/frame

Claims (3)

1. An analysis method of a video-based crowd density and abnormal behavior detection system, comprising the following steps:
(1) crowd density grading: classifying the crowd density in the video by a method combining pixel-point statistics and texture features;
(2) person-count estimation: estimating the number of people in the video by combining pixel-statistics-based estimation with corner-detection-based estimation: first pre-estimating the count with the pixel-statistics method, then, when the current count is judged to exceed a threshold Thr1, estimating it with foreground corner detection;
estimating the number of people based on foreground corner detection:
(a) initializing parameters, namely the values of α and β;
(b) corner detection: acquiring all corner points of the video image with the Shi-Tomasi algorithm;
(c) optical-flow tracking: tracking the acquired corner points with the pyramid LK optical-flow method;
(d) foreground corner extraction: eliminating background points by computing the optical-flow magnitude at each corner point; where the flow magnitude is greater than a threshold Thr3, the point is a foreground point;
(e) count detection: computing the crowd-count estimate N_est from the number of foreground corner points, the initial values, and Eq. (1):
N_est = (αn + βn)/2    (1)
where n is the number of foreground corner points detected in the video image and N is the target crowd count; to reduce the influence of the viewing angle in the image, the lower bound of the count is defined as αn and the upper bound as βn, with α and β suitable positive numbers and α ≤ β; statistics over a large amount of data show that the crowd count N is generally not greater than the number of foreground corner points n, so β ≤ 1; the value of α is related to the fraction of the image area occupied by an individual in the crowd, so 0.5 < α ≤ 1;
(3) abnormal behavior detection: in general, when abnormal behavior occurs in a crowd, the kinetic energy of the human bodies changes much more drastically than under normal conditions, so the behavior state of the crowd can be judged by analyzing the change of the human-body kinetic energy; since a person's kinetic energy is generally positively correlated with the magnitude of the person's optical flow, the kinetic energy can be characterized by the optical-flow magnitude: the flow magnitude is obtained from the optical flow at a point, its change is obtained by combining inter-frame motion information, the average flow magnitude of the crowd is computed, the change of the average flow magnitude is then obtained from inter-frame information, and the average kinetic-energy change ratio is computed from this change relative to the average flow magnitude of the previous frame; a threshold Thr2 is set, and whether abnormal behavior occurs is judged from the relation between the image's average kinetic-energy change ratio and Thr2.
2. The analysis method of a video-based crowd density and abnormal behavior detection system according to claim 1, wherein in step (1) the crowd density is graded as follows:
combining pixel-point statistics with texture features: first pre-estimating the crowd with the pixel-statistics method, then, when the current crowd density is judged to be too high, estimating the crowd density from texture features; the specific steps are:
grading the crowd density based on pixel-point statistics:
(a) converting the image to be processed to grayscale and extracting edges from two adjacent grayscale frames with the Canny edge detector to obtain edge images;
(b) differencing the two edge images and removing interference noise with morphological dilation and erosion to obtain an optimized difference edge image;
(c) AND-ing the edge image of the current frame with the optimized difference edge image to obtain the moving-foreground edge image;
(d) dividing the foreground edge image into rectangular blocks, counting the edge pixels in each block, and computing the final total number of edge pixels;
(e) fitting the relation between the crowd count and the total number of foreground edge pixels by linear fitting, estimating the crowd count, and from it the approximate crowd density;
(f) checking whether the current count exceeds the threshold Thr1: if not, outputting the density class (low or medium density); if so, grading the crowd density based on texture features and outputting high or very high density;
grading the crowd density based on texture features:
assuming I(x, y) is the original image, M × N is the image size with M and N the image width and height, and L is the number of gray levels, the gray-level co-occurrence matrix under a given spatial relationship is:
f(i, j) = #{(x1, y1), (x2, y2) ∈ M × N | I(x1, y1) = i, I(x2, y2) = j}    (2)
in Eq. (2), i and j are image gray values, # denotes the number of elements in the set, and f is an L × L matrix; when the distance between pixels (x1, y1) and (x2, y2) is d and the line joining them makes an angle θ with the horizontal, the resulting co-occurrence matrix is f(i, j | d, θ), i.e., the number of pixel pairs with gray values i and j, distance d, and direction θ, with θ = 0°, 45°, 90°, 135°; five texture parameters are derived from the co-occurrence matrix: contrast, homogeneity, energy, entropy, and correlation.
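As a sketch of Eq. (2), the co-occurrence matrix for one offset (d, θ) and two of the five texture parameters (contrast and energy) can be written in plain Python; the normalization and the offset encoding are standard textbook assumptions, not spelled out in the claim:

```python
def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix f(i, j | d, theta), Eq. (2).

    The offset (dx, dy) encodes distance d and angle theta,
    e.g. (1, 0) for d = 1, theta = 0 degrees.
    """
    f = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for x in range(rows):
        for y in range(cols):
            x2, y2 = x + dy, y + dx
            if 0 <= x2 < rows and 0 <= y2 < cols:
                f[img[x][y]][img[x2][y2]] += 1   # count the pair (i, j)
    return f

def contrast_energy(f):
    """Contrast and energy of the normalized co-occurrence matrix."""
    total = sum(sum(row) for row in f)
    p = [[v / total for v in row] for row in f]  # normalize to probabilities
    n = len(p)
    contrast = sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))
    energy = sum(v * v for row in p for v in row)
    return contrast, energy
```

For the 3×3 image [[0, 0, 1], [0, 0, 1], [0, 2, 2]] with offset (1, 0) there are six horizontal pairs, giving contrast 1.0 and energy 10/36; dense crowds produce co-occurrence matrices with mass spread away from the diagonal, which is what the texture-based grading exploits.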
3. The analysis method of a video-based crowd density and abnormal behavior detection system according to claim 1, wherein in step (3) abnormal behavior is detected as follows:
suppose the optical-flow vector of a pixel p_{x,y} in the video image at time t is (u_{x,y,t}, v_{x,y,t}), where u_{x,y,t} and v_{x,y,t} are the optical-flow components of p_{x,y} in the x and y directions at time t; the optical-flow magnitude of this point at time t is:
m_{x,y,t} = \sqrt{u_{x,y,t}^2 + v_{x,y,t}^2}    (3)
finding the change of the optical-flow magnitude at p_{x,y} between adjacent frames, binarizing it against a set threshold mean, and letting b_{x,y,t} denote the binarized change, with the binarization rule:
b_{x,y,t} = \begin{cases} 1, & |m_{x,y,t} - m_{x,y,t-1}| > mean \\ 0, & \text{otherwise} \end{cases}    (4)
the threshold mean is the average of the inter-frame optical-flow magnitude change, obtained according to Eq. (5);
mean = \frac{1}{rows \times cols} \sum_{x=1}^{rows} \sum_{y=1}^{cols} |m_{x,y,t} - m_{x,y,t-1}|    (5)
where rows and cols denote the numbers of rows and columns of the video image, respectively;
then computing the temporal average of the binarized optical-flow magnitude change of each pixel over successive frames; denoting this average by \bar{b}_{x,y}, it is obtained according to Eq. (6);
\bar{b}_{x,y} = \frac{1}{T} \sum_{t=1}^{T} b_{x,y,t}    (6)
from the above, the optical-flow magnitude of each pixel is computed from the optical-flow information, and from it the average flow magnitude of each frame; letting \bar{m}_t and \bar{E}_t be the average optical-flow magnitude and the average kinetic energy of the current frame, and \bar{m}_{t-1} and \bar{E}_{t-1} those of the previous frame, the relation between the change in kinetic energy and the change in optical-flow magnitude is expressed by Eq. (7):
\bar{E}_t = k \bar{m}_t^2, \qquad \lambda = \frac{\bar{E}_t}{\bar{E}_{t-1}} = \frac{\bar{m}_t^2}{\bar{m}_{t-1}^2}    (7)
where k is a positive number and λ is the average kinetic-energy change ratio; when λ exceeds the threshold Thr2, the current crowd motion is abnormal;
in conclusion, whether the current crowd has abnormal behavior is judged by computing the average kinetic-energy change ratio of each crowd in the image and comparing it against the threshold Thr2; if abnormal behavior exists, the average optical-flow magnitude of the frame preceding the first abnormal frame is saved, so that the kinetic-energy change ratio of subsequent frames relative to that frame can be computed to judge whether the subsequent frames are abnormal.
CN201710411185.9A 2017-06-05 2017-06-05 An Analysis Method for Video-Based Crowd Density and Abnormal Behavior Detection System Active CN107330372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710411185.9A CN107330372B (en) 2017-06-05 2017-06-05 An Analysis Method for Video-Based Crowd Density and Abnormal Behavior Detection System


Publications (2)

Publication Number Publication Date
CN107330372A CN107330372A (en) 2017-11-07
CN107330372B true CN107330372B (en) 2021-05-28

Family

ID=60194147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710411185.9A Active CN107330372B (en) 2017-06-05 2017-06-05 An Analysis Method for Video-Based Crowd Density and Abnormal Behavior Detection System

Country Status (1)

Country Link
CN (1) CN107330372B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895080B (en) * 2017-11-14 2021-06-29 西安建筑科技大学 Dynamic load estimation and fresh air volume control method for large-space building air-conditioning based on image information fusion
CN107777498B (en) * 2017-11-20 2019-07-19 江苏省特种设备安全监督检验研究院 A method for detecting violent behavior in an elevator car
CN108257149A (en) * 2017-12-25 2018-07-06 翟玉婷 A kind of Ship Target real-time tracking detection method based on optical flow field
CN108399388A (en) * 2018-02-28 2018-08-14 福州大学 A kind of middle-high density crowd quantity statistics method
CN108717549A (en) * 2018-04-26 2018-10-30 东华大学 Crowd density analysis method based on unmanned plane vision and support vector cassification
CN108764203A (en) * 2018-06-06 2018-11-06 四川大学 A kind of pedestrian's quantitative analysis and display systems towards urban planning
CN109063578B (en) * 2018-07-06 2022-05-06 江西洪都航空工业集团有限责任公司 Crowd gathering detection method
CN108848348A (en) * 2018-07-12 2018-11-20 西南科技大学 A kind of crowd's abnormal behaviour monitoring device and method based on unmanned plane
CN109299723A (en) * 2018-09-18 2019-02-01 四川大学 A railway freight car running monitoring system
CN111325073B (en) * 2018-12-17 2024-02-20 上海交通大学 Abnormal behavior detection method in surveillance video based on motion information clustering
CN110084112B (en) * 2019-03-20 2022-09-20 太原理工大学 Traffic jam judging method based on image processing
CN110070003B (en) * 2019-04-01 2021-07-30 浙江大华技术股份有限公司 Abnormal behavior detection and optical flow autocorrelation determination method and related device
CN110263619A (en) * 2019-04-30 2019-09-20 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and computer storage medium
CN112347814B (en) * 2019-08-07 2025-08-05 中兴通讯股份有限公司 Passenger flow estimation and display method, system and computer-readable storage medium
CN110781723B (en) * 2019-09-05 2022-09-02 杭州视鑫科技有限公司 Group abnormal behavior identification method
CN110991375B (en) * 2019-12-10 2020-12-15 北京航空航天大学 A group behavior analysis method and device
CN111062337B (en) * 2019-12-19 2023-08-04 北京迈格威科技有限公司 People stream direction detection method and device, storage medium and electronic equipment
CN111539301B (en) * 2020-04-20 2023-04-18 贵州安防工程技术研究中心有限公司 Scene chaos degree discrimination method based on video analysis technology
CN111680567B (en) * 2020-05-12 2023-08-29 深圳数联天下智能科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN111831373A (en) * 2020-06-09 2020-10-27 上海容易网电子商务股份有限公司 Detection processing method for application starting state of android interactive screen
CN113033361A (en) * 2021-03-13 2021-06-25 杭州翔毅科技有限公司 Intelligent human-computer interaction system based on environment visibility
CN114332762B (en) * 2021-12-24 2025-04-25 杭州电子科技大学 Crowd anomaly detection method based on pedestrian group ring block feature extraction
CN114972111B (en) * 2022-06-16 2023-01-10 慧之安信息技术股份有限公司 A Dense Crowd Counting Method Based on GAN Image Inpainting

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156985A (en) * 2011-04-11 2011-08-17 上海交通大学 Method for counting pedestrians and vehicles based on virtual gate
CN102324016A (en) * 2011-05-27 2012-01-18 郝红卫 Statistical method for high-density crowd flow
CN103077423A (en) * 2011-10-25 2013-05-01 中国科学院深圳先进技术研究院 Crowd quantity estimating, local crowd clustering state and crowd running state detection method based on video stream
CN103745230A (en) * 2014-01-14 2014-04-23 四川大学 Adaptive abnormal crowd behavior analysis method
CN106156706A (en) * 2015-04-07 2016-11-23 中国科学院深圳先进技术研究院 Pedestrian's anomaly detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8189905B2 (en) * 2007-07-11 2012-05-29 Behavioral Recognition Systems, Inc. Cognitive model for a machine-learning engine in a video analysis system



Similar Documents

Publication Publication Date Title
CN107330372B (en) An Analysis Method for Video-Based Crowd Density and Abnormal Behavior Detection System
Kulchandani et al. Moving object detection: Review of recent research trends
Xia et al. Towards improving quality of video-based vehicle counting method for traffic flow estimation
CN103077423B (en) Crowd quantity estimation, local crowd clustering state and crowd running state detection method based on video stream
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
Wang Real-time moving vehicle detection with cast shadow removal in video based on conditional random field
CN108647649B (en) A method for detecting abnormal behavior in video
CN103258193B (en) A kind of group abnormality Activity recognition method based on KOD energy feature
CN103745230B (en) Adaptive abnormal crowd behavior analysis method
Avgerinakis et al. Recognition of activities of daily living for smart home environments
Shobha et al. A review on video based vehicle detection, recognition and tracking
CN108053427A (en) A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN102156983A (en) Pattern recognition and target tracking based method for detecting abnormal pedestrian positions
CN108052859A (en) A kind of anomaly detection method, system and device based on cluster Optical-flow Feature
CN106446922B (en) A Method for Analyzing Abnormal Behavior of Crowds
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN103279737A (en) Fight behavior detection method based on spatio-temporal interest point
Xu et al. Hierarchical activity discovery within spatio-temporal context for video anomaly detection
CN107564022A (en) Saliency detection method based on Bayesian Fusion
CN104835147A (en) Method for detecting crowded people flow in real time based on three-dimensional depth map data
CN100382600C (en) Moving Object Detection Method in Dynamic Scene
Karpagavalli et al. Estimating the density of the people and counting the number of people in a crowd environment for human safety
CN105023019A (en) Characteristic description method used for monitoring and automatically detecting group abnormity behavior through video
CN103400155A (en) Pornographic video detection method based on semi-supervised learning of images
Huang et al. Cost-sensitive sparse linear regression for crowd counting with imbalanced training data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant