
CN106203276A - A kind of video passenger flow statistical system and passenger flow statistical method - Google Patents


Info

Publication number
CN106203276A
CN106203276A (application CN201610501924.9A)
Authority
CN
China
Prior art keywords
background
image
passenger flow
flow statistical
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610501924.9A
Other languages
Chinese (zh)
Inventor
陈长宝
杜红民
侯长生
孔晓阳
王茹川
郭振强
多华娥
王磊
王莹莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central Plains Wisdom Urban Design Research Institute Co Ltd
Original Assignee
Central Plains Wisdom Urban Design Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central Plains Wisdom Urban Design Research Institute Co Ltd filed Critical Central Plains Wisdom Urban Design Research Institute Co Ltd
Priority to CN201610501924.9A priority Critical patent/CN106203276A/en
Publication of CN106203276A publication Critical patent/CN106203276A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a video passenger-flow statistics system and a passenger-flow statistics method. The system includes a sequential-frame image acquisition unit, a moving-object detection unit, an object extraction unit and a counting decision unit. The passenger-flow statistics method of the system includes: acquiring and decoding the camera video stream and converting it into a sequence of HSV-format frames; building an image background model of the first frame with the ViBe algorithm according to the spatial and temporal correlation of pixels, comparing the background frame with the current frame to classify pixels as background or foreground, and at the same time updating the background model with a background-update strategy, thereby obtaining the moving-target regions; segmenting the moving-target regions to obtain their bounding rectangles, setting the upper and lower boundaries of each rectangle, and tracking the upper and lower boundaries of each rectangle separately; and determining, from the order in which a moving target's upper and lower boundaries collide with the counting line, whether the target is boarding or alighting, and counting boarding and alighting passengers separately.

Description

A kind of video passenger flow statistical system and passenger flow statistical method
Technical field
The present invention relates to the field of video-based passenger-flow statistics, and in particular to a video passenger-flow statistics system and a passenger-flow statistics method.
Background technology
Traditional passenger-flow statistics usually relies on manual counting to obtain passenger-volume data. Although the precision can meet requirements, it consumes manpower and money and is neither systematic nor comprehensive. Contact devices such as POS terminals and coin machines no longer rely on manual work, but generally only one person can pass at a time, and they cannot simultaneously count boarding and alighting passengers with high precision. The most widely used systems today are infrared detection systems; being contactless, they are a great improvement over contact devices. In actual operation, however, occlusion occurs when several passengers pass the infrared device in succession or at the same time, and a single passenger pausing in the detection area, or interference from the body or carried belongings, also degrades counting precision. Infrared systems therefore serve only as low-accuracy systems providing rough passenger statistics. Yet the number of bus passengers is important passenger-flow information: accurate passenger counts help formulate more scientific scheduling plans, support managers' decisions, and provide a standard for evaluating operations. Hence, with the rapid development of computer vision and recognition technology, these techniques are gradually being applied to the field of people counting.
To solve the problems above, a better technical solution has long been sought.
Summary of the invention
The object of the present invention is to address the deficiencies of the prior art by providing a video passenger-flow statistics system, together with a passenger-flow statistics method based on this system whose algorithm is simple and highly accurate.
To achieve this object, the technical solution adopted by the present invention is a video passenger-flow statistics system comprising: a sequential-frame image acquisition unit, connected to the camera, which acquires and decodes the camera video stream and converts it into HSV format; a moving-object detection unit, which builds an image background model from the acquired HSV-format images with the ViBe algorithm, compares the background frame with the current frame to classify pixels as background or foreground, and at the same time updates the background model with a background-update strategy, thereby obtaining the moving-target regions; an object extraction unit, which segments the moving-target regions, obtains their bounding rectangles, sets the upper and lower boundaries of each rectangle, and tracks the upper and lower boundaries of each rectangle separately; and a counting decision unit, which determines, from the order in which a moving target's upper and lower boundaries collide with the counting line, whether the target is boarding or alighting, and counts boarding and alighting passengers separately.
A passenger-flow statistics method for the video passenger-flow statistics system comprises the following steps:
1) acquiring and decoding the camera video stream and converting it into a sequence of HSV-format frames;
2) building an image background model of the first frame with the ViBe algorithm according to the spatial and temporal correlation of pixels, comparing the background frame with the current frame to classify pixels as background or foreground, and at the same time updating the background model with a background-update strategy, thereby obtaining the moving-target regions;
3) segmenting the moving-target regions to obtain their bounding rectangles, setting the upper and lower boundaries of each rectangle, and tracking the upper and lower boundaries of each rectangle separately;
4) determining, from the order in which a moving target's upper and lower boundaries collide with the counting line, whether the target is boarding or alighting, and counting boarding and alighting passengers separately.
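For illustration only (not part of the claimed solution), the decision rule of step 4) can be sketched as follows. The mapping of crossing order to boarding vs. alighting is an assumption here, since it depends on the camera orientation, and the helper name `crossing_direction` is hypothetical:

```python
def crossing_direction(events):
    """events: chronological list of 'upper'/'lower' boundary-line collisions
    for one tracked rectangle. Returns 'board', 'alight', or None."""
    hits = [e for e in events if e in ('upper', 'lower')]
    if len(hits) < 2:
        return None  # target never fully crossed the counting line
    # Assumption: the lower boundary crossing first means the target moves
    # downward in the image (into the bus), i.e. boarding; otherwise alighting.
    return 'board' if hits[0] == 'lower' else 'alight'

board = alight = 0
for track_events in [['lower', 'upper'], ['upper', 'lower']]:
    d = crossing_direction(track_events)
    if d == 'board':
        board += 1
    elif d == 'alight':
        alight += 1
print(board, alight)  # 1 1
```

Each tracked rectangle contributes at most one count, so a passenger pausing on the line does not inflate the totals.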
Based on the above, in step 1) the camera is mounted vertically on the ceiling directly above the bus door, so as to capture video images of passengers boarding and alighting.
Based on the above, the moving-target/background segmentation flow in step 2) is as follows: first, Gaussian smoothing is applied to the image as pre-processing; then the SILTP values of three consecutive frames are extracted and the distances between each pair of adjacent frames are computed. ViBe background modelling is applied to the first frame; for the current frame, a sample set is kept for each pixel, the Euclidean distance between the current pixel and the background-model samples is computed, and the background frame and the current frame are compared using the following formulas:
S_R(p_t(x, y)) = {p | EuclidDis(p, p_t(x, y)) < R}
Count = #{S_R(p_t(x, y)) ∩ B_t0(x, y)}
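As an illustrative sketch of the ViBe-style comparison above (the sample-set size, the radius R = 20 and the minimum of 2 matches are assumptions, since the text does not fix these values):

```python
import numpy as np

def vibe_classify(pixel, samples, R=20.0, min_matches=2):
    """ViBe-style test for one pixel: count the background samples within
    Euclidean distance R of the current value (the set S_R in the text) and
    label the pixel background if enough samples match."""
    dists = np.linalg.norm(samples - pixel, axis=-1)  # EuclidDis(p, p_t(x, y))
    count = int(np.sum(dists < R))                    # #{S_R ∩ B_t0}
    return 'background' if count >= min_matches else 'foreground'

# Toy model: 20 identical HSV samples for one pixel position.
samples = np.array([[100.0, 100.0, 100.0]] * 20)
print(vibe_classify(np.array([105.0, 100.0, 98.0]), samples))   # background
print(vibe_classify(np.array([200.0, 40.0, 10.0]), samples))    # foreground
```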
Based on the above, updating the background model includes: initializing the background, taking the mean of the first n frames as the initial background image F; computing the structural-similarity coefficient between the current frame and the initial background image; and updating the background according to the structural similarity.
Based on the above, the concrete steps for computing the structural-similarity coefficient between the current frame and the initial background image include:
1) computing the luminance distortion S_m(F, I_t) and the contrast distortion S_v(F, I_t) between the current frame I_t and the initial background image F, where μ_1 and μ_2 are the local means of F and I_t respectively, σ_1 and σ_2 are their local standard deviations, σ_{1,2} is the local covariance between F and I_t, and c_1 and c_2 are constants;
2) determining the structural-similarity map M(F, I_t) between the current frame I_t and the initial background image F: M(F, I_t) = S_m(F, I_t) × S_v(F, I_t);
3) from the structural-similarity map M(F, I_t) of step 2), computing the structural-similarity coefficient m_t between the current frame I_t and the initial background image F: m_t = (M(F, I_t))^γ, where γ is a constant.
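The text names S_m and S_v but does not reproduce their formulas; the sketch below therefore assumes the standard SSIM-style luminance and contrast/structure terms (with the usual 8-bit constants), so it is an interpretation rather than the patented computation:

```python
import numpy as np

def structural_similarity_coeff(F, It, c1=6.5, c2=58.5, gamma=1.0):
    """Sketch of m_t = (M(F, I_t))^gamma with M = S_m * S_v, computed here
    globally over the image. S_m and S_v follow the SSIM-standard luminance
    and contrast/structure terms (an assumption; the text omits them)."""
    mu1, mu2 = F.mean(), It.mean()
    s1, s2 = F.std(), It.std()
    s12 = ((F - mu1) * (It - mu2)).mean()                 # covariance sigma_{1,2}
    Sm = (2 * mu1 * mu2 + c1) / (mu1**2 + mu2**2 + c1)    # luminance distortion
    Sv = (2 * s12 + c2) / (s1**2 + s2**2 + c2)            # contrast distortion
    M = Sm * Sv                                           # structural-similarity map value
    return M ** gamma

rng = np.random.default_rng(0)
F = rng.random((8, 8))
print(structural_similarity_coeff(F, F))  # identical images -> close to 1
```

A real implementation would compute the terms over sliding local windows rather than over the whole frame.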
Based on the structural-similarity map, the concrete steps of the background update include:
a) from the obtained structural-similarity coefficient m_t, computing the moving-region feedback factor d_t: d_t = (1 − α)d_{t−1} + α(1 − m_t), where d_{t−1} is the moving-region feedback factor at time t−1 and α is the learning rate;
b) computing the feedback coefficient β_t at the current time t;
c) updating the background, i.e. determining the background B_t at the current time t: B_t = (1 − β_t·α)B_{t−1} + β_t·α·I_t, where B_{t−1} is the background at time t−1.
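A minimal numerical sketch of the update in steps a)–c). The formula for β_t is not given in the text, so β_t = 1 − d_t is assumed here purely for illustration (a large motion-feedback factor then slows the blending of the current frame into the background):

```python
def update_background(B_prev, I_t, m_t, d_prev, alpha=0.05):
    """One step of the structural-similarity background update:
    d_t = (1 - alpha)*d_{t-1} + alpha*(1 - m_t), then
    B_t = (1 - beta_t*alpha)*B_{t-1} + beta_t*alpha*I_t."""
    d_t = (1 - alpha) * d_prev + alpha * (1 - m_t)
    beta_t = 1.0 - d_t  # assumption: the beta_t formula is not in the text
    B_t = (1 - beta_t * alpha) * B_prev + beta_t * alpha * I_t
    return B_t, d_t

# One update with a mostly-static scene (m_t close to 1).
B, d = 100.0, 0.0
B, d = update_background(B, I_t=120.0, m_t=0.9, d_prev=d)
print(round(B, 3), round(d, 4))  # 100.995 0.005
```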
Compared with the prior art, the present invention has prominent substantive features and represents notable progress. Specifically, it achieves a high detection rate with low computational complexity and thus has wide application scenarios. Compared with existing counting methods, performing passenger-flow statistics by detecting the target region has a solid theoretical foundation, a clear mathematical model, simple implementation and high accuracy.
Exploiting the strong robustness of object structure in the video scene to illumination change, the structural-similarity coefficient between the current frame and the background model is used to suppress the influence of the foreground on the background model, solving the difficulty of existing background-modelling methods, which easily introduce foreground features into the background model during the update.
Detailed description of the invention
The technical solution of the present invention is described in further detail below through specific embodiments.
Embodiment 1
A video passenger-flow statistics system, comprising:
a sequential-frame image acquisition unit, connected to the camera, which acquires and decodes the camera video stream and converts it into HSV format; specifically, the camera is mounted vertically on the ceiling directly above the bus door to capture video images of passengers boarding and alighting; a moving-object detection unit, which builds an image background model from the acquired HSV-format images with the ViBe algorithm, compares the background frame with the current frame to classify pixels as background or foreground, and at the same time updates the background model with a background-update strategy, thereby obtaining the moving-target regions; an object extraction unit, which segments the moving-target regions, obtains their bounding rectangles, sets the upper and lower boundaries of each rectangle, and tracks the upper and lower boundaries of each rectangle separately; and a counting decision unit, which determines, from the order in which a moving target's upper and lower boundaries collide with the counting line, whether the target is boarding or alighting, and counts boarding and alighting passengers separately.
The passenger-flow statistics method of the video passenger-flow statistics system comprises the following steps:
1) acquiring and decoding the camera video stream and converting it into a sequence of HSV-format frames;
2) building an image background model of the first frame with the ViBe algorithm according to the spatial and temporal correlation of pixels, comparing the background frame with the current frame to classify pixels as background or foreground, and at the same time updating the background model with a background-update strategy, thereby obtaining the moving-target regions;
3) segmenting the moving-target regions to obtain their bounding rectangles, setting the upper and lower boundaries of each rectangle, and tracking the upper and lower boundaries of each rectangle separately;
4) determining, from the order in which a moving target's upper and lower boundaries collide with the counting line, whether the target is boarding or alighting, and counting boarding and alighting passengers separately.
Embodiment 2
While running, the system first generates the background from the current input image data; the background is updated only while the door is closed, and passenger counting is performed only while the door is open. During image processing, moving targets must be segmented from the background. The flow is: first, Gaussian smoothing is applied to the image as pre-processing; then the SILTP values of three consecutive frames are extracted and the distances between each pair of adjacent frames are computed. ViBe background modelling is applied to the first frame; for the current frame, a sample set is kept for each pixel, the Euclidean distance between the current pixel and the background-model samples is computed, and the background frame and the current frame are compared using the following formulas:
S_R(p_t(x, y)) = {p | EuclidDis(p, p_t(x, y)) < R}
Count = #{S_R(p_t(x, y)) ∩ B_t0(x, y)}
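The SILTP values mentioned above are not defined in the text; the sketch below follows the common scale-invariant local ternary pattern definition from the literature and is therefore an assumption:

```python
def siltp_code(center, neighbors, tau=0.05):
    """Scale-invariant local ternary pattern for one pixel (assumed common
    definition; the text does not spell it out): each neighbor contributes
    a ternary digit depending on whether it exceeds (1+tau)*center, falls
    below (1-tau)*center, or lies in between."""
    upper, lower = (1 + tau) * center, (1 - tau) * center
    digits = []
    for v in neighbors:
        if v > upper:
            digits.append(1)
        elif v < lower:
            digits.append(2)
        else:
            digits.append(0)
    # pack the base-3 digits into a single integer code
    code = 0
    for d in digits:
        code = code * 3 + d
    return code

# center 100, tau 0.05 -> thresholds 105 and 95; digits 0,1,2,0 -> 15
print(siltp_code(100, [100, 110, 90, 101]))  # 15
```

Because both thresholds scale with the center intensity, the code is invariant to a global intensity scaling, which is why it is attractive for the illumination changes inside a bus.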
Embodiment 3
This embodiment differs from the above embodiments in that, after the image background model has been built, an effective background-update strategy must be used to update the background model and obtain the moving-target regions. The background-model update includes: initializing the background, taking the mean of the first n frames as the initial background image F; computing the structural-similarity coefficient between the current frame and the initial background image; and updating the background according to the structural similarity.
The concrete steps for computing the structural-similarity coefficient between the current frame and the initial background image include:
1) computing the luminance distortion S_m(F, I_t) and the contrast distortion S_v(F, I_t) between the current frame I_t and the initial background image F, where μ_1 and μ_2 are the local means of F and I_t respectively, σ_1 and σ_2 are their local standard deviations, σ_{1,2} is the local covariance between F and I_t, and c_1 and c_2 are constants;
2) determining the structural-similarity map M(F, I_t) between the current frame I_t and the initial background image F: M(F, I_t) = S_m(F, I_t) × S_v(F, I_t);
3) from the structural-similarity map M(F, I_t) of step 2), computing the structural-similarity coefficient m_t between I_t and F: m_t = (M(F, I_t))^γ, where γ is a constant.
Based on the structural-similarity map, the concrete steps of the background update include:
a) from the obtained structural-similarity coefficient m_t, computing the moving-region feedback factor d_t: d_t = (1 − α)d_{t−1} + α(1 − m_t), where d_{t−1} is the moving-region feedback factor at time t−1 and α is the learning rate;
b) computing the feedback coefficient β_t at the current time t;
c) updating the background, i.e. determining the background B_t at the current time t: B_t = (1 − β_t·α)B_{t−1} + β_t·α·I_t, where B_{t−1} is the background at time t−1.
Finally, it should be noted that the above embodiments only illustrate, and do not limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the specific embodiments of the present invention may still be modified, or some technical features replaced by equivalents, without departing from the spirit of the technical solution of the present invention, and all such modifications shall fall within the scope of the claimed technical solution.

Claims (6)

1. A video passenger-flow statistics system, characterized in that the system comprises:
a sequential-frame image acquisition unit, connected to the camera, which acquires and decodes the camera video stream and converts it into HSV format;
a moving-object detection unit, which builds an image background model from the acquired HSV-format images with the ViBe algorithm, compares the background frame with the current frame to classify pixels as background or foreground, and at the same time updates the background model with a background-update strategy, thereby obtaining the moving-target regions;
an object extraction unit, which segments the moving-target regions, obtains their bounding rectangles, sets the upper and lower boundaries of each rectangle, and tracks the upper and lower boundaries of each rectangle separately;
a counting decision unit, which determines, from the order in which a moving target's upper and lower boundaries collide with the counting line, whether the target is boarding or alighting, and counts boarding and alighting passengers separately.
2. A passenger-flow statistics method for the video passenger-flow statistics system of claim 1, characterized in that the method comprises the following steps:
1) acquiring and decoding the camera video stream and converting it into a sequence of HSV-format frames;
2) building an image background model of the first frame with the ViBe algorithm according to the spatial and temporal correlation of pixels, comparing the background frame with the current frame to classify pixels as background or foreground, and at the same time updating the background model with a background-update strategy, thereby obtaining the moving-target regions;
3) segmenting the moving-target regions to obtain their bounding rectangles, setting the upper and lower boundaries of each rectangle, and tracking the upper and lower boundaries of each rectangle separately;
4) determining, from the order in which a moving target's upper and lower boundaries collide with the counting line, whether the target is boarding or alighting, and counting boarding and alighting passengers separately.
3. The video passenger-flow statistics method according to claim 2, characterized in that in step 1) the camera is mounted vertically on the ceiling directly above the bus door to capture video images of passengers boarding and alighting.
4. The video passenger-flow statistics method according to claim 2, characterized in that the moving-target/background segmentation flow in step 2) is: first, Gaussian smoothing is applied to the image as pre-processing; then the SILTP values of three consecutive frames are extracted and the distances between each pair of adjacent frames are computed; ViBe background modelling is applied to the first frame; for the current frame, a sample set is kept for each pixel, the Euclidean distance between the current pixel and the background-model samples is computed, and the background frame and the current frame are compared using the following formulas:
S_R(p_t(x, y)) = {p | EuclidDis(p, p_t(x, y)) < R}
Count = #{S_R(p_t(x, y)) ∩ B_t0(x, y)}
5. The video passenger-flow statistics method according to claim 4, characterized in that the background-model update includes: initializing the background, taking the mean of the first n frames as the initial background image F; computing the structural-similarity coefficient between the current frame and the initial background image; and updating the background according to the structural similarity.
6. The video passenger-flow statistics method according to claim 5, characterized in that
the concrete steps for computing the structural-similarity coefficient between the current frame and the initial background image include:
1) computing the luminance distortion S_m(F, I_t) and the contrast distortion S_v(F, I_t) between the current frame I_t and the initial background image F, where μ_1 and μ_2 are the local means of F and I_t respectively, σ_1 and σ_2 are their local standard deviations, σ_{1,2} is the local covariance between F and I_t, and c_1 and c_2 are constants;
2) determining the structural-similarity map M(F, I_t) between the current frame I_t and the initial background image F: M(F, I_t) = S_m(F, I_t) × S_v(F, I_t);
3) from the structural-similarity map M(F, I_t) of step 2), computing the structural-similarity coefficient m_t between I_t and F: m_t = (M(F, I_t))^γ, where γ is a constant;
and, based on the structural-similarity map, the concrete steps of the background update include:
a) from the obtained structural-similarity coefficient m_t, computing the moving-region feedback factor d_t: d_t = (1 − α)d_{t−1} + α(1 − m_t), where d_{t−1} is the moving-region feedback factor at time t−1 and α is the learning rate;
b) computing the feedback coefficient β_t at the current time t;
c) updating the background, i.e. determining the background B_t at the current time t: B_t = (1 − β_t·α)B_{t−1} + β_t·α·I_t, where B_{t−1} is the background at time t−1.
CN201610501924.9A 2016-06-30 2016-06-30 A kind of video passenger flow statistical system and passenger flow statistical method Pending CN106203276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610501924.9A CN106203276A (en) 2016-06-30 2016-06-30 A kind of video passenger flow statistical system and passenger flow statistical method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610501924.9A CN106203276A (en) 2016-06-30 2016-06-30 A kind of video passenger flow statistical system and passenger flow statistical method

Publications (1)

Publication Number Publication Date
CN106203276A true CN106203276A (en) 2016-12-07

Family

ID=57462546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610501924.9A Pending CN106203276A (en) 2016-06-30 2016-06-30 A kind of video passenger flow statistical system and passenger flow statistical method

Country Status (1)

Country Link
CN (1) CN106203276A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845620A (en) * 2016-12-19 2017-06-13 江苏慧眼数据科技股份有限公司 A kind of passenger flow counting method based on quene state analysis
CN106874864A (en) * 2017-02-09 2017-06-20 广州中国科学院软件应用技术研究所 A kind of outdoor pedestrian's real-time detection method
CN108038865A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 A kind of public transport video passenger flow statistical method
CN108960052A (en) * 2018-05-28 2018-12-07 南京邮电大学 Ship overload detecting method based on video flowing
CN110264422A (en) * 2019-06-14 2019-09-20 西安电子科技大学 The optical image security method of optical flicker pixel is eliminated based on ViBe model
CN110443100A (en) * 2018-05-04 2019-11-12 郑州宇通客车股份有限公司 A kind of passenger flow statistical method, passenger flow statistical system and school bus
CN110969131A (en) * 2019-12-04 2020-04-07 大连理工大学 Subway people flow counting method based on scene flow
CN114926422A (en) * 2022-05-11 2022-08-19 西南交通大学 Method and system for detecting boarding and alighting passenger flow

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040258307A1 (en) * 2003-06-17 2004-12-23 Viola Paul A. Detecting pedestrians using patterns of motion and apprearance in videos
CN102750710A (en) * 2012-05-31 2012-10-24 信帧电子技术(北京)有限公司 Method and device for counting motion targets in images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040258307A1 (en) * 2003-06-17 2004-12-23 Viola Paul A. Detecting pedestrians using patterns of motion and apprearance in videos
CN102750710A (en) * 2012-05-31 2012-10-24 信帧电子技术(北京)有限公司 Method and device for counting motion targets in images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YONG LUO et al.: "Motion objects segmentation based on structural similarity background modelling", IET Computer Vision *
CUI Molei: "Design and Development of a Bus Video People-Counting System", China Masters' Theses Full-text Database, Information Science and Technology *
ZHAO Longhe: "Research on Object Detection and Tracking Algorithms Based on Multi-feature Fusion", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845620A (en) * 2016-12-19 2017-06-13 江苏慧眼数据科技股份有限公司 A kind of passenger flow counting method based on quene state analysis
CN106845620B (en) * 2016-12-19 2019-09-10 江苏慧眼数据科技股份有限公司 A kind of passenger flow counting method based on quene state analysis
CN106874864A (en) * 2017-02-09 2017-06-20 广州中国科学院软件应用技术研究所 A kind of outdoor pedestrian's real-time detection method
CN108038865A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 A kind of public transport video passenger flow statistical method
CN110443100A (en) * 2018-05-04 2019-11-12 郑州宇通客车股份有限公司 A kind of passenger flow statistical method, passenger flow statistical system and school bus
CN108960052A (en) * 2018-05-28 2018-12-07 南京邮电大学 Ship overload detecting method based on video flowing
CN110264422A (en) * 2019-06-14 2019-09-20 西安电子科技大学 The optical image security method of optical flicker pixel is eliminated based on ViBe model
CN110969131A (en) * 2019-12-04 2020-04-07 大连理工大学 Subway people flow counting method based on scene flow
CN114926422A (en) * 2022-05-11 2022-08-19 西南交通大学 Method and system for detecting boarding and alighting passenger flow


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20161207