CN104658008A - Personnel gathering detection method based on video images - Google Patents
- Publication number
- CN104658008A (application CN201510012881.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- frame
- pixel
- difference
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a personnel gathering detection method based on video images. The method comprises the following steps: performing background learning on a monitored region from continuous video images to obtain the current static background image of the monitored region; performing a per-pixel background difference operation and an image segmentation operation on each of n consecutive video frames selected from the video images, and accumulating the resulting binary segmented images to obtain an accumulated image matrix; performing threshold segmentation on the accumulated image matrix to obtain a segmented image; removing noise from the segmented image and filling holes to form a target image; counting, for each connected region of the target image whose pixel value is 1, the number of pixels, and taking that count as the region's area; and judging, from the areas of the connected regions in the target image and a preset area threshold, whether a personnel gathering region exists. By adopting the disclosed method, the computational cost can be reduced and rapid detection of personnel gathering can be realized.
Description
Technical field
The present invention relates to video surveillance technology, and in particular to a personnel gathering detection method based on video images.
Background technology
In prior-art video monitoring, whenever personnel gather in a monitored scene, the management risk and control difficulty of the monitored region increase, and the scene must be managed differently from the normal state. Because modern video surveillance systems are deployed at enormous scale, with very large numbers of cameras, discovering personnel gathering in all monitored scenes by having staff watch every camera around the clock is both costly in manpower and prone to missed detections. Automatically detecting personnel gathering in monitored scenes by video analysis has therefore become a requirement for intelligent monitoring systems.
Prior-art video-based moving-object detection and analysis algorithms generally analyze personnel activity by combining moving-object detection with people counting. When such a method is used to detect personnel gathering, personnel density must be computed over all regions, so the computational efficiency is low; moreover, occlusion, crossing paths, walking in company, and other complex motion patterns make the counting inaccurate.
Summary of the invention
In view of this, the invention provides a personnel gathering detection method based on video images, which obtains, directly from continuous video images, the regions where personnel have been active over a long period, and identifies and raises alarms on them as personnel gathering regions, thereby realizing rapid detection of personnel gathering at low computational cost.
Technical scheme of the present invention is specifically achieved in that
A personnel gathering detection method based on video images, the method comprising:
A. performing background learning on the monitored region from continuous video images to obtain the current static background image BI of the monitored region;
B. performing, against the current background image BI, a per-pixel background difference operation and an image segmentation operation on each of n consecutive video frames selected from the video images, and accumulating the segmented images corresponding to the consecutive frames to obtain the accumulated image matrix FSM_n;
C. performing threshold segmentation on the accumulated image matrix FSM_n to obtain the segmented image FSE;
D. removing noise from the segmented image FSE and filling holes to form the target image FSEM;
E. counting the pixels of each connected region of the target image FSEM whose pixel value is 1, and taking the pixel count of a connected region as its area;
F. judging, from the area of each connected region in the target image FSEM and a preset area threshold th_a, whether a personnel gathering region exists.
Preferably, after step F the method further comprises:
G. marking the detected personnel gathering regions on the video images.
Preferably, step A comprises:
A1. obtaining the current frame image F_k of the monitored video, the previous frame image F_{k−1}, and the background image B_{k−1} corresponding to F_{k−1}; where k is the frame number of the current frame;
A2. when k = 1, using the first frame image F_1 as both the previous frame image F_{k−1} and the background image B_{k−1}; when k > 1, computing the front-frame background difference BD_k and the inter-frame difference FD_k from the obtained F_k, F_{k−1}, and B_{k−1};
A3. updating the background image B_{k−1} pixel by pixel according to BD_k, FD_k, and the update coefficient of each pixel in B_{k−1}, obtaining the background image B_k corresponding to the current frame F_k;
A4. when k is less than a preset initial background-update frame count, taking the next frame as the current frame and returning to step A1; otherwise, taking the current background image B_k as the static current background image BI of the monitored region.
Preferably, the front-frame background difference BD_k and the inter-frame difference FD_k are computed by the formulas:
BD_k = F_k − B_{k−1}
FD_k = |F_k − F_{k−1}|
where BD_k is the difference between F_k and B_{k−1}, and FD_k is the absolute value of the difference between F_k and F_{k−1}.
Preferably, updating the background image B_{k−1} pixel by pixel comprises:
Step A3a. determining the update amount m_k(x, y) of the current pixel (x, y) according to the inter-frame difference FD_k and a preset first threshold FTh;
Step A3b. when BD_k is greater than a preset second threshold BTh, updating the current pixel B_{k−1}(x, y) of the background image by the update amount m_k(x, y); otherwise, leaving the current pixel unchanged, i.e. the value of B_k(x, y) equals B_{k−1}(x, y);
Step A3c. performing steps A3a and A3b pixel by pixel for every pixel of the background image B_{k−1}.
Preferably, step A3a may comprise:
when the inter-frame difference FD_k is greater than the preset first threshold FTh, setting the update amount m_k(x, y) of the current pixel (x, y) to 0; otherwise, computing m_k(x, y) from the update coefficient k_k(x, y);
where (x, y) is the coordinate of the current pixel.
Preferably, the update amount m_k(x, y) of the current pixel (x, y) is computed by the formula:
m_k(x, y) = k_k(x, y) × BD_k(x, y).
Preferably, the update coefficient k_k(x, y) is:
Preferably, the first threshold FTh is 2.
Preferably, step B comprises:
Step B1. setting up the accumulated image matrix FSM_k; where k is the k-th frame among the n consecutive video frames;
Step B2. performing a per-pixel background difference operation between the current background image BI and the current frame F_k of the video image to obtain the difference image matrix BFD_k of F_k; where k is the frame number of the current frame F_k;
Step B3. computing the mean μ and standard deviation σ of the difference image matrix BFD_k;
Step B4. computing an optimal segmentation threshold Th over the pixel values of BFD_k greater than μ + σ, using the maximum between-class variance (OTSU) criterion;
Step B5. performing image segmentation on BFD_k according to the optimal segmentation threshold Th to obtain the segmentation data matrix BFS_k of the current frame F_k;
Step B6. adding the segmentation data matrix BFS_k onto the accumulated image matrix FSM_{k−1} of the previous frame F_{k−1} to obtain the accumulated image matrix FSM_k of the current frame F_k;
Step B7. performing steps B1–B6 in sequence on each of the selected n consecutive video frames to obtain the accumulated image matrix FSM_n.
Preferably, the difference image matrix BFD_k of the current frame F_k is obtained by the formula:
BFD_k = |F_k − BI|
where BFD_k is the absolute value of the difference between F_k and BI.
Preferably, the mean μ and standard deviation σ are computed by the formulas:
μ = (1/(r·c)) Σ_{i,j} BFD_k(i, j)
σ = sqrt( (1/(r·c)) Σ_{i,j} (BFD_k(i, j) − μ)² )
where the difference image matrix BFD_k is a matrix of r rows and c columns, and BFD_k(i, j) denotes the pixel value at row i, column j of BFD_k.
Preferably, the segmentation data matrix BFS_k is obtained by the formula below:
where BFD_k(i, j) denotes the pixel value at row i, column j of BFD_k, and BFS_k(i, j) denotes the value of the element at row i, column j of BFS_k.
Preferably, the accumulated image matrix FSM_k is obtained by the formula:
FSM_k(i, j) = min(max(FSM_{k−1}(i, j) + BFS_k(i, j), 0), 255)
where FSM_k(i, j) denotes the value at row i, column j of FSM_k, and FSM_{k−1}(i, j) denotes the value at row i, column j of FSM_{k−1}.
Preferably, the segmented image FSE is obtained by the formula:
FSE(i, j) = 1 if FSM_n(i, j) > th_sm, and 0 otherwise
where FSE(i, j) denotes the value of the element at row i, column j of FSE, FSM_n(i, j) denotes the value at row i, column j of FSM_n, and th_sm is a preset third threshold.
Preferably, the value of the third threshold th_sm is 200.
Preferably, removing the noise in the segmented image FSE in step D comprises:
performing an erosion operation on the segmented image FSE with a 5 × 5 square template to remove noise from FSE.
Preferably, filling holes in step D comprises:
performing a dilation operation on the noise-removed segmented image FSE with a 7 × 7 square template to fill the holes in FSE, forming the target image FSEM.
Preferably, judging in step F whether a personnel gathering region exists comprises:
when the area of a connected region is greater than the area threshold th_a, determining that this connected region is a personnel gathering region.
Preferably, the value of the area threshold th_a is 1000.
As can be seen from the above, because the personnel gathering detection method based on video images of the present invention analyzes the accumulated state of moving-object regions over multiple frames, it obtains, directly from continuous video images, the regions where personnel have been active over a long period, and identifies and raises alarms on them as personnel gathering regions, thereby realizing rapid detection of personnel gathering at low computational cost.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the personnel gathering detection method based on video images in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the effect of the personnel gathering detection method based on video images in an embodiment of the present invention.
Detailed description
To make the object, technical scheme, and advantages of the present invention clearer, the present invention is described in more detail below with reference to the accompanying drawings and embodiments.
The present embodiment provides a personnel gathering detection method based on video images.
Fig. 1 is a schematic flowchart of the personnel gathering detection method based on video images in an embodiment of the present invention. As shown in Fig. 1, the method may comprise the following steps:
Step 11: performing background learning on the monitored region from continuous video images to obtain the static current background image BI of the monitored region.
Step 12: performing, against the current background image BI, a per-pixel background difference operation and an image segmentation operation on each of n consecutive video frames selected from the video images, and accumulating the segmented images corresponding to the consecutive frames to obtain the accumulated image matrix FSM_n. The segmented images are binary images.
Step 13: performing threshold segmentation on the accumulated image matrix FSM_n to obtain the segmented image FSE.
Step 14: removing noise from the segmented image FSE and filling holes to form the target image FSEM.
Step 15: counting the pixels of each connected region of the target image FSEM whose pixel value is 1, and taking the pixel count of a connected region as its area.
Step 16: judging, from the area of each connected region in the target image FSEM and a preset area threshold th_a, whether a personnel gathering region exists.
Through the above steps 11–16, the regions of personnel gathering can be detected from the continuous video images of a video monitoring system.
Preferably, after step 16 the method may further comprise:
marking the detected personnel gathering regions on the video images, realizing the personnel gathering detection function.
In the technical scheme of the present invention, step 11 may be realized in various ways. One embodiment is described below by way of example.
For example, preferably, in a particular embodiment of the present invention, step 11 comprises:
Step 111: obtaining the current frame image F_k of the monitored video, the previous frame image F_{k−1}, and the background image B_{k−1} corresponding to F_{k−1}; where k is the frame number of the current frame.
Step 112: when k = 1, using the first frame image F_1 as both the previous frame image F_{k−1} and the background image B_{k−1}; when k > 1, computing the front-frame background difference BD_k and the inter-frame difference FD_k from the obtained F_k, F_{k−1}, and B_{k−1}.
For example, in this preferred embodiment, BD_k and FD_k may be computed by the formulas:
BD_k = F_k − B_{k−1}    (1)
FD_k = |F_k − F_{k−1}|    (2)
where BD_k is the difference between F_k and B_{k−1}, and FD_k is the absolute value of the difference between F_k and F_{k−1}. The difference operation subtracts, pixel by pixel, the corresponding pixel values of the two images; after this per-pixel difference operation, the difference image matrices BD_k and FD_k are obtained.
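The per-pixel operations of formulas (1) and (2) can be sketched in NumPy as follows; the array shapes and values are illustrative, not from the patent:

```python
import numpy as np

def frame_differences(F_k, F_km1, B_km1):
    """Per-pixel background difference BD_k (signed) and
    inter-frame difference FD_k (absolute), as in formulas (1)-(2)."""
    F_k = F_k.astype(np.int16)                     # avoid uint8 wrap-around
    BD_k = F_k - B_km1.astype(np.int16)            # BD_k = F_k - B_{k-1}
    FD_k = np.abs(F_k - F_km1.astype(np.int16))    # FD_k = |F_k - F_{k-1}|
    return BD_k, FD_k

# toy 2x2 grayscale frames
F_k   = np.array([[10, 20], [30, 40]], dtype=np.uint8)
F_km1 = np.array([[12, 20], [25, 40]], dtype=np.uint8)
B_km1 = np.array([[ 8, 20], [30, 50]], dtype=np.uint8)
BD, FD = frame_differences(F_k, F_km1, B_km1)
```

Note the cast to a signed type: BD_k keeps its sign (it is later compared against BTh), whereas FD_k is an absolute value.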
Step 113: updating the background image B_{k−1} pixel by pixel according to BD_k, FD_k, and the update coefficient of each pixel in B_{k−1}, obtaining the background image B_k corresponding to the current frame F_k.
Preferably, in a particular embodiment of the present invention, updating the background image B_{k−1} pixel by pixel may comprise:
Step 113a: determining the update amount m_k(x, y) of the current pixel (x, y) according to the inter-frame difference FD_k and a preset first threshold FTh.
For example, in this preferred embodiment, step 113a may comprise:
when the inter-frame difference FD_k is greater than the preset first threshold FTh, setting the update amount m_k(x, y) of the current pixel (x, y) to 0;
otherwise, computing m_k(x, y) from the update coefficient k_k(x, y);
where (x, y) is the coordinate of the current pixel.
In addition, preferably, in a particular embodiment of the present invention, the update amount m_k(x, y) of the current pixel (x, y) may be computed by the formula:
m_k(x, y) = k_k(x, y) × BD_k(x, y)    (3)
In the technical scheme of the present invention, the value of the update coefficient k_k(x, y) may be preset according to the actual conditions of the monitored scene. For example, preferably, in a particular embodiment of the present invention, the update coefficient k_k(x, y) may be a piecewise function as described below:
Preferably, in a particular embodiment of the present invention, the first threshold FTh may be set to 2.
Step 113b: when BD_k is greater than a preset second threshold BTh, updating the current pixel B_{k−1}(x, y) of the background image by the update amount m_k(x, y); otherwise, leaving the current pixel unchanged, i.e. the value of B_k(x, y) equals B_{k−1}(x, y).
Preferably, in a particular embodiment of the present invention, the current pixel B_{k−1}(x, y) of the background image may be updated by the formula:
B_k(x, y) = B_{k−1}(x, y) + m_k(x, y)    (5)
Preferably, in a particular embodiment of the present invention, the second threshold BTh may be set to 2.
Step 113c: performing steps 113a and 113b pixel by pixel for every pixel of the background image B_{k−1}.
Step 114: when k is less than a preset initial background-update frame count, taking the next frame as the current frame and returning to step 111; otherwise, taking the current background image B_k as the static current background image BI of the monitored region.
Through the above steps 111–114, the static current background image BI of the monitored region is obtained.
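One iteration of steps 113a–113c can be sketched as follows. The patent's piecewise update coefficient k_k(x, y) is not reproduced in this text, so a constant coefficient KK is assumed here; FTh = BTh = 2 follow the embodiment, and the toy frames are illustrative:

```python
import numpy as np

FTh, BTh = 2, 2   # first / second thresholds from the embodiment
KK = 0.05         # ASSUMPTION: the patent's piecewise k_k(x, y) is not
                  # reproduced in the extraction, so a constant is used

def update_background(F_k, F_km1, B_km1):
    """One pass of steps 113a-113c: per-pixel background update."""
    F_k, B_km1 = F_k.astype(np.float64), B_km1.astype(np.float64)
    BD_k = F_k - B_km1                              # formula (1)
    FD_k = np.abs(F_k - F_km1.astype(np.float64))   # formula (2)
    # step 113a: moving pixels (large FD_k) contribute no update
    m_k = np.where(FD_k > FTh, 0.0, KK * BD_k)      # formula (3)
    # step 113b: update only where BD_k exceeds BTh
    return np.where(BD_k > BTh, B_km1 + m_k, B_km1)  # formula (5)

# toy example: pixel (0,0) shows a static change relative to the background
F_km1 = np.full((2, 2), 100.0); F_km1[0, 0] = 110.0
F_k   = np.array([[110.0, 101.0], [100.0, 103.0]])
B_km1 = np.full((2, 2), 100.0)
B_k = update_background(F_k, F_km1, B_km1)
```

The static change at (0, 0) is slowly absorbed into the background, while the moving pixel at (1, 1) (large inter-frame difference) leaves the background untouched.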
In addition, in the technical scheme of the present invention, step 12 may be realized in various ways. One embodiment is described below by way of example.
For example, preferably, in a particular embodiment of the present invention, step 12 may specifically comprise:
Step 121: setting up the accumulated image matrix FSM_k; where k is the k-th frame among the n consecutive video frames.
In the technical scheme of the present invention, the image matrix FSM_k has the same number of rows and columns as the video frame images. In addition, all elements of the initial accumulated image matrix FSM_0 have the value 0.
Step 122: performing a per-pixel background difference operation between the current background image BI and the current frame F_k of the video image to obtain the difference image matrix BFD_k of F_k; where k is the frame number of the current frame F_k.
Preferably, in a particular embodiment of the present invention, the difference image matrix BFD_k of the current frame F_k may be obtained by the formula:
BFD_k = |F_k − BI|    (6)
where BFD_k is the absolute value of the difference between F_k and BI. The background difference operation subtracts, pixel by pixel, the corresponding pixel values of the two images; after this per-pixel background difference operation, the difference image matrix BFD_k of the current frame F_k is obtained.
Step 123: computing the mean μ and standard deviation σ of the difference image matrix BFD_k.
Preferably, in a particular embodiment of the present invention, the mean μ and standard deviation σ may be computed by the formulas:
μ = (1/(r·c)) Σ_{i,j} BFD_k(i, j)    (7)
σ = sqrt( (1/(r·c)) Σ_{i,j} (BFD_k(i, j) − μ)² )    (8)
where the difference image matrix BFD_k is a matrix of r rows and c columns, and BFD_k(i, j) denotes the pixel value at row i, column j of BFD_k.
Step 124: computing an optimal segmentation threshold Th over the pixel values of BFD_k greater than μ + σ, using the maximum between-class variance (OTSU) criterion.
Step 125: performing image segmentation on BFD_k according to the optimal segmentation threshold Th to obtain the segmentation data matrix BFS_k of the current frame F_k.
Preferably, in a particular embodiment of the present invention, the segmentation data matrix BFS_k may be obtained by the formula below:
where BFD_k(i, j) denotes the pixel value at row i, column j of BFD_k, and BFS_k(i, j) denotes the value of the element at row i, column j of BFS_k.
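Steps 123–125 can be sketched as follows. The extraction does not reproduce the patent's segmentation formula (9), so a binary output in {0, 1} is assumed here, and the toy difference image is illustrative:

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's maximum between-class variance criterion over a 1-D
    array of 8-bit values; returns the threshold that maximizes
    the between-class variance."""
    hist = np.bincount(values.astype(np.int64), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

def segment_frame(BFD_k):
    """Steps 123-125: Otsu threshold over pixels above mu + sigma,
    then binary segmentation (output in {0, 1} is an assumption;
    the patent's formula (9) is not reproduced in this text)."""
    mu, sigma = BFD_k.mean(), BFD_k.std()
    candidates = BFD_k[BFD_k > mu + sigma]
    Th = otsu_threshold(candidates) if candidates.size else 255
    return (BFD_k > Th).astype(np.uint8)

# toy difference image: background near 0, two bright blobs
BFD = np.zeros((10, 10))
BFD[0:2, 0:2] = 100.0
BFD[5:7, 5:7] = 220.0
BFS = segment_frame(BFD)
```

Restricting Otsu's criterion to pixels above μ + σ keeps the large, near-zero background population from dominating the histogram.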
Step 126: adding the segmentation data matrix BFS_k onto the accumulated image matrix FSM_{k−1} of the previous frame F_{k−1} to obtain the accumulated image matrix FSM_k of the current frame F_k.
Preferably, in a particular embodiment of the present invention, the accumulated image matrix FSM_k may be obtained by the formula:
FSM_k(i, j) = min(max(FSM_{k−1}(i, j) + BFS_k(i, j), 0), 255)    (10)
where FSM_k(i, j) denotes the value at row i, column j of FSM_k, and FSM_{k−1}(i, j) denotes the value at row i, column j of FSM_{k−1}.
According to the above formula, the corresponding elements of BFS_k are added to the matrix FSM_{k−1}, and the results are compared with 0 and 255 so as to confine the final values to the interval [0, 255], forming the matrix FSM_k. The function max(a, b) returns the larger of a and b, and the function min(a, b) returns the smaller of a and b.
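Formula (10) is a saturating accumulation, which can be sketched with NumPy's clip; the loop count and toy mask below are illustrative:

```python
import numpy as np

def accumulate(FSM_prev, BFS_k):
    """Formula (10): add the segmented frame to the running
    accumulator and clamp the result to [0, 255]."""
    return np.clip(FSM_prev.astype(np.int32) + BFS_k, 0, 255)

FSM = np.zeros((2, 2), dtype=np.int32)
BFS = np.array([[1, 0], [1, 1]])
for _ in range(300):      # a region that stays in motion saturates at 255
    FSM = accumulate(FSM, BFS)
```

Pixels that are repeatedly segmented as motion saturate at 255, while pixels never segmented stay at 0; the subsequent threshold th_sm then isolates long-lived activity regions.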
Step 127: performing the above steps 121–126 in sequence on each of the selected n consecutive video frames, finally obtaining the accumulated image matrix FSM_n.
Through the above steps 121–127, the accumulated image matrix FSM_n is obtained.
In addition, preferably, in a particular embodiment of the present invention, the segmented image FSE may be obtained by the formula:
FSE(i, j) = 1 if FSM_n(i, j) > th_sm, and 0 otherwise
where FSE(i, j) denotes the value of the element at row i, column j of FSE, FSM_n(i, j) denotes the value at row i, column j of FSM_n, and th_sm is a preset third threshold.
Preferably, in a particular embodiment of the present invention, the value of the third threshold th_sm may be set to 200.
In addition, preferably, in a particular embodiment of the present invention, removing the noise in the segmented image FSE in step 14 may comprise:
performing an erosion operation on the segmented image FSE with a 5 × 5 square template to remove noise from FSE.
In addition, preferably, in a particular embodiment of the present invention, filling holes in step 14 may comprise:
performing a dilation operation on the noise-removed segmented image FSE with a 7 × 7 square template to fill the holes in FSE, forming the target image FSEM.
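The erosion and dilation above can be sketched in plain NumPy; in practice a library routine (e.g. OpenCV's erode/dilate) would be used, and the toy mask below is illustrative:

```python
import numpy as np

def _morph(img, size, op):
    """Apply a size x size square structuring element; op is np.min
    (erosion) or np.max (dilation). Binary image in {0, 1}."""
    pad = size // 2
    # pad with the identity value of the operation so borders behave
    fill = 1 if op is np.min else 0
    padded = np.pad(img, pad, constant_values=fill)
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(padded[i:i + size, j:j + size])
    return out

def clean_mask(FSE):
    """Step 14: 5x5 erosion removes noise, then 7x7 dilation fills holes."""
    return _morph(_morph(FSE, 5, np.min), 7, np.max)

FSE = np.zeros((12, 12), dtype=np.uint8)
FSE[2:9, 2:9] = 1     # solid 7x7 activity region
FSE[0, 11] = 1        # isolated noise pixel
FSEM = clean_mask(FSE)
```

The isolated pixel is erased by the erosion, while the solid region survives (its 3 × 3 eroded core is grown back to 9 × 9 by the larger dilation template).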
In addition, preferably, in a particular embodiment of the present invention, judging in step 16 whether a personnel gathering region exists may comprise:
when the area of a connected region is greater than the area threshold th_a, determining that this connected region is a personnel gathering region.
Preferably, in a particular embodiment of the present invention, the value of the area threshold th_a may be preset according to the actual application conditions. For example, the value of the area threshold th_a may be 1000.
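Steps 15 and 16 can be sketched as a breadth-first connected-component labeling; 4-connectivity is an assumption (the patent does not specify the connectivity), and the toy sizes and the smaller threshold are illustrative:

```python
from collections import deque
import numpy as np

def gathering_areas(FSEM, th_a):
    """Steps 15-16: measure each connected region of 1-pixels by its
    pixel count and keep those whose area exceeds th_a
    (4-connectivity is assumed here)."""
    h, w = FSEM.shape
    seen = np.zeros((h, w), dtype=bool)
    areas = []
    for si in range(h):
        for sj in range(w):
            if FSEM[si, sj] != 1 or seen[si, sj]:
                continue
            seen[si, sj] = True
            area, q = 0, deque([(si, sj)])
            while q:                      # flood-fill one region
                i, j = q.popleft()
                area += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w \
                            and FSEM[ni, nj] == 1 and not seen[ni, nj]:
                        seen[ni, nj] = True
                        q.append((ni, nj))
            if area > th_a:
                areas.append(area)
    return areas

# toy target image: one 100-pixel region and one isolated pixel
FSEM = np.zeros((20, 20), dtype=np.uint8)
FSEM[2:12, 2:12] = 1
FSEM[15, 15] = 1
areas = gathering_areas(FSEM, th_a=50)   # the patent suggests th_a = 1000
```

Only the large region passes the area test; the isolated pixel (area 1) is rejected as noise.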
Fig. 2 is a schematic diagram of the effect of the personnel gathering detection method based on video images in an embodiment of the present invention. As shown in Fig. 2, when a busy square with many moving people is monitored, background learning is first performed on the monitored region to obtain its static current background image. Then, moving-object detection is performed on the video frames by background differencing to obtain the moving-object regions in each frame, as shown in Fig. 2(b); the moving-object regions can be distinguished by different labels, for example labeling moving-object regions with 1 and the object-free background with 0, so that each video frame becomes a corresponding binary image. Next, moving-object detection is performed on the continuous video frames, the binary moving-object image of each frame is obtained and accumulated, yielding the accumulated image of the video motion regions, as shown in Fig. 2(c). Subsequently, threshold segmentation is performed on the accumulated image matrix of the video motion regions to form the target image FSEM, as shown in Fig. 2(d). On the basis of this target image FSEM, region detection is performed and the area of each connected region is obtained; when the area of a connected region is greater than the preset area threshold, it is judged that personnel gathering exists in that region, which is marked in the video image as shown in Fig. 2(a), completing the personnel gathering detection function.
In summary, because the personnel gathering detection method based on video images of the present invention analyzes the accumulated state of moving-object regions over multiple frames, it obtains, directly from continuous video images, the regions where personnel have been active over a long period, and identifies and raises alarms on them as personnel gathering regions, thereby realizing rapid detection of personnel gathering at low computational cost.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
1. A personnel gathering detection method based on video images, characterized in that the method comprises:
A. performing background learning on the monitored region from continuous video images to obtain the current static background image BI of the monitored region;
B. performing, against the current background image BI, a per-pixel background difference operation and an image segmentation operation on each of n consecutive video frames selected from the video images, and accumulating the segmented images corresponding to the consecutive frames to obtain the accumulated image matrix FSM_n;
C. performing threshold segmentation on the accumulated image matrix FSM_n to obtain the segmented image FSE;
D. removing noise from the segmented image FSE and filling holes to form the target image FSEM;
E. counting the pixels of each connected region of the target image FSEM whose pixel value is 1, and taking the pixel count of a connected region as its area;
F. judging, from the area of each connected region in the target image FSEM and a preset area threshold th_a, whether a personnel gathering region exists.
2. The method according to claim 1, characterized in that, after step F, the method further comprises:
G. marking the detected personnel gathering regions on the video images.
3. The method according to claim 1, characterized in that step A comprises:
A1. obtaining the current frame image F_k of the monitored video, the previous frame image F_{k−1}, and the background image B_{k−1} corresponding to F_{k−1}; where k is the frame number of the current frame;
A2. when k = 1, using the first frame image F_1 as both the previous frame image F_{k−1} and the background image B_{k−1}; when k > 1, computing the front-frame background difference BD_k and the inter-frame difference FD_k from the obtained F_k, F_{k−1}, and B_{k−1};
A3. updating the background image B_{k−1} pixel by pixel according to BD_k, FD_k, and the update coefficient of each pixel in B_{k−1}, obtaining the background image B_k corresponding to the current frame F_k;
A4. when k is less than a preset initial background-update frame count, taking the next frame as the current frame and returning to step A1; otherwise, taking the current background image B_k as the static current background image BI of the monitored region.
4. The method according to claim 3, characterized in that the front-frame background difference BD_k and the inter-frame difference FD_k are computed by the formulas:
BD_k = F_k − B_{k−1}
FD_k = |F_k − F_{k−1}|
where BD_k is the difference between F_k and B_{k−1}, and FD_k is the absolute value of the difference between F_k and F_{k−1}.
5. The method according to claim 3, characterized in that updating the background image B_{k−1} pixel by pixel comprises:
Step A3a. determining the update amount m_k(x, y) of the current pixel (x, y) according to the inter-frame difference FD_k and a preset first threshold FTh;
Step A3b. when BD_k is greater than a preset second threshold BTh, updating the current pixel B_{k−1}(x, y) of the background image by the update amount m_k(x, y); otherwise, leaving the current pixel unchanged, i.e. the value of B_k(x, y) equals B_{k−1}(x, y);
Step A3c. performing steps A3a and A3b pixel by pixel for every pixel of the background image B_{k−1}.
6. The method according to claim 5, characterized in that step A3a comprises:
when the inter-frame difference FD_k is greater than the preset first threshold FTh, setting the update amount m_k(x, y) of the current pixel (x, y) to 0; otherwise, computing m_k(x, y) from the update coefficient k_k(x, y);
where (x, y) is the coordinate of the current pixel.
7. The method according to claim 6, characterized in that the update amount m_k(x, y) of the current pixel (x, y) is computed by the formula:
m_k(x, y) = k_k(x, y) × BD_k(x, y).
8. The method according to claim 7, characterized in that the update coefficient k_k(x, y) is:
9. The method according to claim 5, characterized in that the first threshold FTh is 2.
10. The method according to claim 1, characterized in that step B comprises:
Step B1: setting up the accumulated-image matrix FSMk, where k is the index of the k-th frame among the n successive video frames;
Step B2: performing a pixel-by-pixel background difference between the current background image BI and the current frame Fk of the video image to obtain the difference image matrix BFDk of the current frame Fk, where k is the frame number of the current frame Fk;
Step B3: computing the mean μ and the standard deviation σ of the difference image matrix BFDk;
Step B4: for the pixel values in the difference image matrix BFDk that are greater than μ + σ, calculating the optimal segmentation threshold Th with the maximum between-class variance (OTSU) criterion;
Step B5: segmenting the difference image matrix BFDk according to the optimal segmentation threshold Th to obtain the segmentation data matrix BFSk of the current frame Fk;
Step B6: adding the segmentation data matrix BFSk to the accumulated-image matrix FSMk-1 of the previous frame Fk-1 to obtain the accumulated-image matrix FSMk of the current frame Fk;
Step B7: performing steps B1 to B6 in sequence for each video frame among the selected n successive video frames, obtaining the accumulated-image matrix FSMn.
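Steps B1 to B7 can likewise be sketched in NumPy. Again a hedged illustration, not the patented implementation: the function names, the 64-bin histogram approximation of the OTSU criterion, and the float accumulator are assumptions introduced here.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Maximum between-class variance (OTSU) threshold over a 1-D sample."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / max(hist.sum(), 1)     # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                               # class-0 weight
    mu = np.cumsum(p * centers)                     # cumulative mean
    mu_t = mu[-1]                                   # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)                # 0/0 bins contribute nothing
    return centers[np.argmax(sigma_b)]

def accumulate_foreground(frames, background):
    """Steps B1-B7: per-frame background difference, OTSU segmentation
    restricted to pixels above mu + sigma, and accumulation of the binary
    segmentation masks over the n selected frames."""
    fsm = np.zeros_like(background, dtype=float)    # B1: accumulator FSM
    for frame in frames:
        bfd = np.abs(frame - background)            # B2: difference image BFDk
        mu, sigma = bfd.mean(), bfd.std()           # B3: mean and std deviation
        candidates = bfd[bfd > mu + sigma]          # B4: restrict to BFDk > mu+sigma
        th = otsu_threshold(candidates) if candidates.size else np.inf
        bfs = (bfd > th).astype(float)              # B5: binary segmentation BFSk
        fsm += bfs                                  # B6: accumulate onto FSMk-1
    return fsm                                      # B7: FSMn after n frames
```

A pixel that stays foreground across the n frames accumulates a value close to n in FSMn, which the subsequent threshold segmentation of the accumulated image (step B of claim 1) can then separate from transient noise.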
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510012881.3A CN104658008B (en) | 2015-01-09 | 2015-01-09 | A kind of gathering of people detection method based on video image |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104658008A true CN104658008A (en) | 2015-05-27 |
| CN104658008B CN104658008B (en) | 2017-09-12 |
Family
ID=53249084
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201510012881.3A Active CN104658008B (en) | 2015-01-09 | 2015-01-09 | A kind of gathering of people detection method based on video image |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104658008B (en) |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060195199A1 (en) * | 2003-10-21 | 2006-08-31 | Masahiro Iwasaki | Monitoring device |
| US20070195993A1 (en) * | 2006-02-22 | 2007-08-23 | Chao-Ho Chen | Method for video object segmentation |
| CN102364944A (en) * | 2011-11-22 | 2012-02-29 | 电子科技大学 | A video surveillance method for preventing people from gathering |
Non-Patent Citations (3)
| Title |
|---|
| PETER H. TU ET AL.: "Crowd Segmentation Through Emergent Labeling", 《SMVP 2004, LNCS》 * |
| VENKATESH BALA SUBBURAMAN ET AL.: "Counting people in the crowd using a generic head detector", 《2012 IEEE NINTH INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL-BASED SURVEILLANCE》 * |
| HAN YAWEI ET AL.: "Moving Object Segmentation Method Combining Frame-Difference Accumulation and Background Subtraction", 《COMPUTER ENGINEERING AND APPLICATIONS》 * |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106612385A (en) * | 2015-10-22 | 2017-05-03 | 株式会社理光 | Video detection method and video detection device |
| CN106612385B (en) * | 2015-10-22 | 2019-09-06 | 株式会社理光 | Video detecting method and video detecting device |
| CN105550663A (en) * | 2016-01-07 | 2016-05-04 | 北京环境特性研究所 | Cinema attendance statistical method and system |
| US10440239B1 (en) | 2018-10-01 | 2019-10-08 | Interra Systems | System and method for detecting presence of a living hold in a video stream |
| CN110929597A (en) * | 2019-11-06 | 2020-03-27 | 普联技术有限公司 | Image-based leaf filtering method and device and storage medium |
| CN111339945A (en) * | 2020-02-26 | 2020-06-26 | 贵州安防工程技术研究中心有限公司 | Video-based people group and scatter inspection method and system |
| CN111339945B (en) * | 2020-02-26 | 2023-03-31 | 贵州安防工程技术研究中心有限公司 | Video-based people group and scatter inspection method and system |
| CN111402232A (en) * | 2020-03-16 | 2020-07-10 | 深圳市瑞图生物技术有限公司 | Method for detecting sperm aggregation in semen |
| CN111667503A (en) * | 2020-06-12 | 2020-09-15 | 中国科学院长春光学精密机械与物理研究所 | Multi-target tracking method, device and equipment based on foreground detection and storage medium |
| CN111898524A (en) * | 2020-07-29 | 2020-11-06 | 江苏艾什顿科技有限公司 | 5G edge computing gateway and application thereof |
| CN112613456A (en) * | 2020-12-29 | 2021-04-06 | 四川中科朗星光电科技有限公司 | Small target detection method based on multi-frame differential image accumulation |
| CN114494350A (en) * | 2022-01-28 | 2022-05-13 | 北京中电兴发科技有限公司 | Personnel gathering detection method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| CN104658008B (en) | 2017-09-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104658008A (en) | Personnel gathering detection method based on video images | |
| CN109887281B (en) | Method and system for monitoring traffic incident | |
| CN102509083B (en) | Detection method for body conflict event | |
| CN110889328B (en) | Method, device, electronic equipment and storage medium for detecting road traffic condition | |
| CN105744232A (en) | Method for preventing power transmission line from being externally broken through video based on behaviour analysis technology | |
| KR102217253B1 (en) | Apparatus and method for analyzing behavior pattern | |
| CN104301697A (en) | An automatic detection system and method for violent incidents in public places | |
| CN111325048B (en) | A method and device for detecting people gathering | |
| CN101266689A (en) | A mobile target detection method and device | |
| CN103198296A (en) | Method and device of video abnormal behavior detection based on Bayes surprise degree calculation | |
| CN104811586A (en) | Scene change video intelligent analyzing method, device, network camera and monitoring system | |
| CN103729858A (en) | Method for detecting article left over in video monitoring system | |
| CN110255318B (en) | Method for detecting idle articles in elevator car based on image semantic segmentation | |
| CN107610393A (en) | A kind of intelligent office monitoring system | |
| CN112883768A (en) | Object counting method and device, equipment and storage medium | |
| CN109867186A (en) | A kind of elevator malfunction detection method and system based on intelligent video analysis technology | |
| CN103679690A (en) | Object detection method based on segmentation background learning | |
| CN115719464A (en) | Water meter durability device water leakage monitoring method based on machine vision | |
| CN119290087A (en) | A river flow monitoring method and system integrating multi-source data | |
| CN102542673A (en) | Automatic teller machine (ATM) pre-warning method and system based on computer vision | |
| CN117372629A (en) | A digital twin-based reservoir visual data supervision and control system and method | |
| CN103489202A (en) | Intrusion detection method based on videos | |
| CN105741503B (en) | A kind of parking lot real time early warning method under existing monitoring device | |
| CN104077571A (en) | Method for detecting abnormal behavior of throng by adopting single-class serialization model | |
| CN105262984B (en) | A kind of detector with fixing device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||