
CN1984236A - Method for collecting characteristics in traffic flow information video detection - Google Patents

Method for collecting characteristics in traffic flow information video detection

Info

Publication number
CN1984236A
CN1984236A CNA2005100620043A CN200510062004A
Authority
CN
China
Prior art keywords
value
image
pixel
gaussian
feature
Prior art date
Legal status
Granted
Application number
CNA2005100620043A
Other languages
Chinese (zh)
Other versions
CN100502463C (en)
Inventor
赵燕伟
胡峰俊
董红召
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CNB2005100620043A priority Critical patent/CN100502463C/en
Publication of CN1984236A publication Critical patent/CN1984236A/en
Application granted granted Critical
Publication of CN100502463C publication Critical patent/CN100502463C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract


A feature collection method for video detection of traffic flow information. The detection system comprises a camera and a signal processor and uses an improved Gaussian mixture model to characterize each pixel of an image frame; only the brightness feature is used. If no moving target (vehicle) is present, the video image is essentially static and the value of each pixel over time obeys a fixed statistical model; in this algorithm each pixel is represented by a mixture of K Gaussian distributions. When a new image frame is obtained, the mixture model is updated: if a pixel of the current image matches the mixture model it is judged to be a background point, otherwise a foreground point. The absolute difference between the established background model and the current image is then processed to obtain an accurate vehicle outline and the parameters needed for tracking. The invention adapts to environmental changes, runs in real time, and processes frames quickly.


Description

Feature collection method in traffic flow information video detection
(1) Technical field
The present invention relates to a traffic flow information video detection method, and in particular to a feature collection method used in such video detection.
(2) Background art
Urban populations and vehicle numbers are increasing sharply, traffic volume grows day by day, and congestion is worsening. Traffic systems face immense pressure; traffic problems have become a major issue in city management, hindering and restricting urban economic development, and have gradually become a common global problem. Faced with increasingly serious traffic problems, we cannot rely solely on measures such as building or rebuilding roads and adding signal-light control to relieve the situation.
Traffic flow information collection is an important step in an intelligent transportation system. The information collected includes traffic volume, vehicle speed, vehicle classification, road occupancy, traffic density, queue length, turning movements, and stopped or incident-causing vehicles. Since 1970, experts and scholars at home and abroad have developed many traffic information collection devices, such as speed radar, inductive loop detectors, ultrasonic detectors, and traffic microwave detectors. Practical application shows that these collection methods have the following shortcomings: (1) detection accuracy and reliability are not high; (2) they are not suitable for large-scale detection; (3) the amount of traffic information obtained is small; (4) they cannot show vehicles, license plates, traffic scenes, and other information vital for traffic research, analysis, and enforcement. Limited by detection range, detection capability, and reliability, traditional vehicle detectors therefore cannot satisfy the requirements of present traffic systems.
Most early video detection techniques used the virtual-loop method, such as AUTOSCOPE, CCATS, TAS, IMPACTS, and TrafficCam, whose operating principle is similar to a buried inductive loop detector. Current video vehicle tracking identifies the pixels in the traffic scene image that match vehicle features, segments the image, and matches vehicles between successive frames according to the extracted features, thereby calculating traffic parameters. The problem with feature tracking is that, because the image is affected by the surrounding environment (such as building shadows and street lamps), a vehicle's features cannot be guaranteed to remain identical at different positions along the road. Detecting motion information from an image sequence and recognizing and tracking moving targets is the most important and most critical technology; current approaches either highlight the target or eliminate the background, and there are roughly three methods: inter-frame difference, background difference, and optical flow.
The inter-frame difference method is highly adaptive, but choosing the right pair of successive frames for differencing is demanding and depends on the speed of the moving object; motion that is too fast or too slow both cause problems. Optical flow computation is very complex and, without hardware assistance, has difficulty meeting real-time requirements. P. Bouthemy, D. Murray, and others have also used this kind of analysis to segment motion.
Although the background difference method in its ordinary sense is fairly simple to implement, its adaptive ability is relatively poor and it cannot avoid certain dynamic variations and interference. The effectiveness of background elimination is crucial to the whole system. Many background elimination algorithms have been proposed. Prediction-based methods, such as Kalman filtering and Wiener filtering, do not consider the use of depth information. The adaptive background elimination algorithm based on a Gaussian mixture model proposed by Harville [10] et al. takes depth, color information, and temporal adaptivity into account and improves segmentation quality, but the algorithm is computationally expensive and its real-time performance is poor.
(3) Summary of the invention
To overcome the shortcomings of the feature tracking used in prior-art traffic flow video detection methods, in which the image is easily affected by the environment, real-time performance is poor, and processing speed is slow, the present invention provides a feature collection method for traffic flow information video detection that adapts to environmental changes, works in real time, and processes quickly.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A feature collection method in traffic flow information video detection: the detection system comprises a camera and a signal processor; the video image sequence input by the camera is {1, 2, …, t, …}, and in video frame t the value of pixel i, X_{i,t} = [R_{i,t}, G_{i,t}, B_{i,t}], is processed. The probability density function of the k-th Gaussian distribution is formula (1):
η_k(X_t, μ_k, Σ_k) = (2π)^{-n/2} |Σ_k|^{-1/2} exp( -(1/2)(X_t − μ_k)^T Σ_k^{-1} (X_t − μ_k) )    (1)
The probability of the current pixel i is computed with formula (2):
P(X_{i,t}) = Σ_{k=1}^{K} ω_{k,t−1} · η_k(X_{i,t}, μ_{k,t−1}, Σ_{k,t−1})    (2)
The method comprises the following steps:
(1) Acquire the camera's video image, obtain the R, G, B color-space image sequence, and denoise the image by median filtering;
(2) Convert the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B) / 3;
(3) Set the parameters of the Gaussian mixture algorithm; the parameters include the global background threshold T, the learning rate α, the number of Gaussian distributions K, and the initial weight ω;
(4) Read the brightness values of the image, take the brightness value of each pixel of the first frame as the mean of the mixture, set the variance to a predetermined empirical value, and establish a single-Gaussian background model;
(5) Read frame t and compare each pixel with the existing k (k ≤ K) Gaussian models of that pixel, checking whether formula (3) holds:
|X_{i,t} − μ_k| < 2.5σ_k    (3);
(5.1) If a match is found, update the parameters and weight of the k-th Gaussian model; the parameters include the mean and variance, see formulas (4), (5), (6):
μ_t = (1 − ρ)μ_{t−1} + ρX_t    (4)
σ_t² = (1 − ρ)σ_{t−1}² + ρ(X_t − μ_t)^T(X_t − μ_t)    (5)
ρ = α·η(X_t | μ_k, σ_k)    (6);
(5.2) If there is no match and k < K, add a Gaussian model for frame t; the new Gaussian takes the value of X_{i,t} as its mean, with the variance and weight ω set to empirical values;
(5.3) If there is no match and k = K, replace the Gaussian with the lowest weight among the K Gaussians by a new Gaussian; the new Gaussian takes the value of X_{i,t} as its mean, with the variance and weight ω set to empirical values;
(5.4) The weight ω is updated with formula (7):
ω_{k,t} = (1 − α)ω_{k,t−1} + α·M_{k,t}    (7)
In the formula above, ω_{k,t} is the current weight, α is the learning rate, and ω_{k,t−1} is the corresponding weight of the previous frame; M_{k,t} is the match indicator: M_{k,t} = 1 if matched, M_{k,t} = 0 if not;
(6) Take the absolute difference between the established background model and the current image, extract the features of the moving target, and obtain the vehicle outline and tracking parameters after processing.
Further, in step (6), extracting the features of the moving target comprises:
(6.1) Compute the area of the moving region: let the side length of a square pixel be h; the area S of region A is computed with formula (8):
S = Σ_{(x,y)∈A} h²    (8)
In the formula above, the points (x, y) range over all points belonging to region A;
(6.2) Compute the region center: the centroid is computed from all points in region A with formulas (9) and (10):
x̄ = (1/S) Σ_{(x,y)∈A} x    (9)
ȳ = (1/S) Σ_{(x,y)∈A} y    (10)
(6.3) Compute the length and width of the moving target: apply the minimum enclosing rectangle (MER) of the object. The object boundary is rotated in increments of a preset angle; after each increment a horizontally placed rectangle MER is fitted to the boundary, and the maximum and minimum X and Y values of the rotated boundary points are recorded. When the area of the MER reaches its minimum, the size of that MER gives the length and width of the target.
Further, in step (6), extracting the features of the moving target also comprises:
(6.4) Invariant moments: for the digital image function f(x, y), computed over all points belonging to the region, if f is piecewise continuous and nonzero at only a finite number of points in the XY plane, its moments of every order are judged to exist.
Further, in step (6), before the features of the moving target are extracted, the moving image is binarized and then dilated and eroded. Dilation fills small holes in the target region and merges into an object all background points in contact with it; erosion removes isolated foreground noise points and eliminates all boundary points of an object. The outline of the moving vehicle is thereby obtained and stored in the outline attribute of a user-defined structure.
In step (3), the global background threshold T is set to T = 0.7, the learning rate α typically lies in the range [0.001, 0.01], K typically lies in [3, 5], and the initial weight ω is set to ω = 0.05.
The operating principle of the present invention is as follows: only the luminance component is used in the covariance matrix, because noise interferes strongly with chrominance information but only weakly with luminance. By sacrificing chrominance information, the real-time performance of the whole traffic flow detection system is greatly improved while target extraction is affected little. By differencing the current image frame against the background model, an accurate moving-vehicle target is obtained; through image binarization, erosion, and dilation the outline of the moving vehicle is obtained, and the region area, region centroid, minimum enclosing rectangle (MER) of the vehicle outline, and invariant moments are extracted. By checking the region area, a moving object is classified as a person, a vehicle, or another disturbance; the extracted region centroid, MER of the vehicle outline, and invariant moments enable real-time and effective detection and tracking of moving vehicles.
The beneficial effects of the present invention are mainly: 1. it adapts to environmental changes and copes with rainy or foggy weather and slow changes in lighting; 2. the algorithm is fast and highly real-time, processing 16–17 frames per second; 3. it is simple to operate; 4. through video detection it detects road traffic flow information and road traffic conditions in real time and records traffic flow data and road condition information.
(4) Description of the drawings
Fig. 1 is the system flow diagram of the feature collection method in traffic flow information video detection.
Fig. 2 is the flow chart of feature collection based on the improved Gaussian mixture background model.
(5) Embodiment
The present invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1 and Fig. 2, in a feature collection method in traffic flow information video detection, the detection system comprises a camera and a signal processor. All pixel processing described below assumes a fixed camera, and the input video image sequence is {1, 2, …, t, …}; in video frame t the value of pixel i, X_{i,t} = [R_{i,t}, G_{i,t}, B_{i,t}], is processed, where the probability density function of the k-th Gaussian distribution is formula (1):
η_k(X_t, μ_k, Σ_k) = (2π)^{-n/2} |Σ_k|^{-1/2} exp( -(1/2)(X_t − μ_k)^T Σ_k^{-1} (X_t − μ_k) )    (1)
The probability of the current pixel i is computed with formula (2):
P(X_{i,t}) = Σ_{k=1}^{K} ω_{k,t−1} · η_k(X_{i,t}, μ_{k,t−1}, Σ_{k,t−1})    (2)
The feature collection method comprises the following steps:
(1) Obtain the R, G, B color-space image sequence from the high-resolution CCD camera; since it inevitably contains noise, the images are denoised with a median filter. Median filtering is a common way to remove random image noise. In traffic flow images, low-pass filtering would blur the boundaries while removing noise, because edge contours contain a large amount of high-frequency information, while high-pass filtering would strengthen the noise along with the signal. Median filtering, a spatial-domain method, therefore suppresses the noise in the image while keeping the contours sharp.
(2) Convert the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B) / 3.
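A minimal sketch of steps (1) and (2), assuming OpenCV and NumPy as the implementation libraries and a 3×3 median kernel (none of which are specified by the patent):

    import cv2
    import numpy as np

    def to_brightness(frame_bgr: np.ndarray) -> np.ndarray:
        """Denoise one video frame and return its brightness image S = (R + G + B) / 3."""
        denoised = cv2.medianBlur(frame_bgr, 3)            # median filter: removes impulse noise, keeps edges
        b, g, r = cv2.split(denoised.astype(np.float32))
        return (r + g + b) / 3.0                           # brightness space S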
(3) Set the parameters of the Gaussian mixture algorithm. The global background threshold T (which determines the number of background distributions) is generally set to T = 0.7. The learning rate α typically lies in [0.001, 0.01]; here α = 0.005. K typically lies in [3, 5]; the larger K is, the more complex the scenes the system can represent, but the computation grows correspondingly, so this algorithm uses K = 3. The initial weight ω is generally given a small value; here ω = 0.05.
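For reference, the parameter values chosen in this embodiment can be collected in a small configuration object (the container itself is only an illustrative assumption):

    from dataclasses import dataclass

    @dataclass
    class MogParams:
        T: float = 0.7            # global background threshold
        alpha: float = 0.005      # learning rate, typical range [0.001, 0.01]
        K: int = 3                # Gaussians per pixel, typical range [3, 5]
        w_init: float = 0.05      # weight given to a newly created Gaussian
        sigma_init: float = 20.0  # large initial standard deviation (empirical value)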
(4) Read the brightness values of the image and take the brightness value of each pixel of the first frame as the mean of the mixture. The variance is given a fairly large empirical value; here σ = 20. A single-Gaussian background model is thus established (initialization of the Gaussian mixture model).
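A sketch of this initialization, assuming the per-pixel mixture is stored in three NumPy arrays of shape (height, width, K); the array layout is an assumption:

    import numpy as np

    def init_model(first_brightness: np.ndarray, p: MogParams):
        """Build the single-Gaussian background model from the first frame's brightness."""
        h, w = first_brightness.shape
        mu = np.zeros((h, w, p.K), dtype=np.float32)
        mu[..., 0] = first_brightness                      # mean of the first Gaussian = first frame
        sigma = np.full((h, w, p.K), p.sigma_init, dtype=np.float32)
        weight = np.zeros((h, w, p.K), dtype=np.float32)
        weight[..., 0] = 1.0                               # only one Gaussian is active at start
        return mu, sigma, weight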
(5) Read frame t and compare each pixel with the existing k (k ≤ K) Gaussian models of that pixel, checking whether formula (3) holds:
|X_{i,t} − μ_k| < 2.5σ_k    (3);
(5.1) If a match is found, update the parameters and weight of the k-th Gaussian model. The parameters include the mean and variance, see formulas (4), (5), (6):
μ_t = (1 − ρ)μ_{t−1} + ρX_t    (4)
σ_t² = (1 − ρ)σ_{t−1}² + ρ(X_t − μ_t)^T(X_t − μ_t)    (5)
ρ = α·η(X_t | μ_k, σ_k)    (6)
(5.2) If there is no match and k < K, add a Gaussian model; the new Gaussian takes the value of X_{i,t} as its mean and is given a large variance (empirical value, here σ = 20) and a small weight, ω = 0.05.
(5.3) If there is no match and k = K, replace the Gaussian with the lowest weight by a new Gaussian. Its mean and variance are assigned as in (5.2).
(5.4) The weight ω is updated with formula (7):
ω_{k,t} = (1 − α)ω_{k,t−1} + α·M_{k,t}    (7)
In the formula above, ω_{k,t} is the current weight, α is the learning rate, and ω_{k,t−1} is the corresponding weight of the previous frame; M_{k,t} is the match indicator: M_{k,t} = 1 if matched, M_{k,t} = 0 if not. Regarding the learning rate α: a larger α gives a stronger ability to adapt to environmental changes, so rapidly changing background is absorbed into the background model, but the model is more easily affected by noise; a smaller α gives a weaker ability to adapt, and temporarily stationary objects may be absorbed into the background model as changed background.
Read a new frame and repeat steps (5.1)–(5.4) to build up the Gaussian distributions of the background model.
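A simplified per-frame sketch of steps (5.1)–(5.4), under the array layout assumed above. It approximates ρ in formula (6) by α (a common simplification) and, because the K slots are pre-allocated, it handles the no-match cases (5.2) and (5.3) uniformly by re-initializing the lowest-weight Gaussian; it is an illustration, not the patented implementation itself:

    import numpy as np

    def update_model(brightness, mu, sigma, weight, p: MogParams):
        """One frame of the mixture-of-Gaussians update; returns the background-match mask."""
        h, w, K = mu.shape
        x = brightness[..., None].astype(np.float32)            # shape (h, w, 1)
        matched = np.abs(x - mu) < 2.5 * sigma                  # formula (3), tested per Gaussian

        # keep only the first matching Gaussian of each pixel
        idx = np.argmax(matched, axis=-1)
        has_match = matched.any(axis=-1)
        first = np.zeros_like(matched)
        first[np.arange(h)[:, None], np.arange(w)[None, :], idx] = True
        first &= has_match[..., None]

        # (5.1) update the matched Gaussian: formulas (4) and (5), with rho ~ alpha
        rho = p.alpha
        mu_new = (1 - rho) * mu + rho * x
        var_new = (1 - rho) * sigma ** 2 + rho * (x - mu_new) ** 2
        mu = np.where(first, mu_new, mu)
        sigma = np.where(first, np.sqrt(var_new), sigma)

        # (5.4) weight update, formula (7): M = 1 for the matched Gaussian, 0 otherwise
        weight = (1 - p.alpha) * weight + p.alpha * first

        # (5.2)/(5.3) no match: re-initialize the lowest-weight Gaussian of that pixel
        lowest = np.argmin(weight, axis=-1)
        rows, cols = np.nonzero(~has_match)
        mu[rows, cols, lowest[rows, cols]] = brightness[rows, cols]
        sigma[rows, cols, lowest[rows, cols]] = p.sigma_init
        weight[rows, cols, lowest[rows, cols]] = p.w_init

        weight /= weight.sum(axis=-1, keepdims=True)            # keep the K weights normalized
        return mu, sigma, weight, has_match                     # has_match ~ background pixels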
(6) Take the absolute difference between the established background model and the current image, extract the features of the moving target, and obtain the vehicle outline and tracking parameters after processing.
The foreground target is first binarized and then dilated and eroded. Dilation fills small holes in the target region and merges into an object all background points in contact with it. Erosion removes isolated foreground noise points and eliminates all boundary points of an object. Through these two morphological operations an accurate outline of the moving vehicle is obtained and stored in the outline attribute of a user-defined structure.
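A sketch of this binarization, morphology, and contour-extraction step, assuming OpenCV; the fixed threshold, the 3×3 kernel, and the OpenCV 4 return signature of findContours are assumptions:

    import cv2
    import numpy as np

    def vehicle_contours(diff_image: np.ndarray):
        """Binarize the |background - current| image, clean it up, and return the contours."""
        _, binary = cv2.threshold(diff_image.astype(np.uint8), 30, 255, cv2.THRESH_BINARY)
        kernel = np.ones((3, 3), np.uint8)
        binary = cv2.dilate(binary, kernel, iterations=1)   # fill small holes in the targets
        binary = cv2.erode(binary, kernel, iterations=1)    # remove isolated noise points
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return contours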
By extracting the target we obtain the object of interest; the target's features still need to be extracted and described. Current target features can be divided into gray-level features, texture features, and geometric features. The features used by this system for tracking are introduced below:
1) Region area
The region area is a basic feature of a region; it describes the size of the region. Let the side length of a square pixel be h; the area S is then computed with formula (8):
S = Σ_{(x,y)∈A} h²    (8)
The points (x, y) range over all points belonging to region A. By computing the area of the moving region one can judge whether the moving target is a non-vehicle disturbance, judge the width of the moving vehicle, and detect overlapping vehicles. When the area of a moving vehicle falls below a certain threshold, the vehicle is judged to have left the effective region.
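A minimal sketch of formula (8) for a region given as a binary mask (the mask representation is an assumption); with pixel side length h the area is just the pixel count times h²:

    import numpy as np

    def region_area(mask: np.ndarray, h: float = 1.0) -> float:
        """Area S = sum over (x, y) in A of h^2, for a boolean region mask."""
        return float(mask.sum()) * h * h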
2) Region centroid
The region centroid is a global descriptor. Its coordinates are computed from all points belonging to the region; although the coordinates of the region's points are always integers, the coordinates of the centroid are often not integers. When the size of a region is very small relative to the distances between regions, the region can be approximately represented by a particle located at its centroid.
The centroid is computed from all points in region A with formulas (9) and (10):
x̄ = (1/S) Σ_{(x,y)∈A} x    (9)
ȳ = (1/S) Σ_{(x,y)∈A} y    (10)
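A sketch of formulas (9)–(10) on the same binary-mask representation:

    import numpy as np

    def region_centroid(mask: np.ndarray):
        """Centroid (x_bar, y_bar) of a non-empty boolean region mask; usually non-integer."""
        ys, xs = np.nonzero(mask)
        s = xs.size                       # area with h = 1
        return xs.sum() / s, ys.sum() / s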
3) Length and width
After an object has been extracted from an image, its extent in the horizontal and vertical directions is easy to compute: it suffices to know the maximum and minimum row and column numbers of the object. For an arbitrarily oriented moving object, however, horizontal and vertical are not necessarily the directions of interest, and in that case the minimum enclosing rectangle (MER) of the object can be used.
With the MER technique, the boundary of the object is rotated through 90° in increments of roughly 3°; after each increment a horizontally placed rectangle MER is fitted to the boundary, and for the computation it is only necessary to record the maximum and minimum X and Y values of the rotated boundary points. At a certain rotation angle the area of the MER reaches its minimum, and the size of that MER can then be taken as the length and width of the object. The rotation angle at which the MER is smallest also gives the direction of the object's principal axis. This technique is particularly suitable for roughly rectangular objects and gives satisfactory results for vehicle detection. When vehicles overlap or move too fast, matching the mathematical features of the target while tracking the moving vehicle outline effectively improves tracking precision.
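OpenCV's minAreaRect performs essentially this rotating-fit search; a sketch of obtaining the target's length, width, and principal-axis angle from a contour (the use of OpenCV is an assumption):

    import cv2
    import numpy as np

    def target_length_width(contour: np.ndarray):
        """Length, width, and rotation angle of the minimum enclosing rectangle (MER)."""
        (_cx, _cy), (w_rect, h_rect), angle = cv2.minAreaRect(contour)
        return max(w_rect, h_rect), min(w_rect, h_rect), angle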
4) Invariant moments
The moments of a region in the image plane can also be considered as features. For the digital image function f(x, y), if f is piecewise continuous and nonzero at only a finite number of points in the XY plane, it can be proved that its moments of every order exist. The moments of a region are computed from all points belonging to the region and are therefore relatively insensitive to noise.
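One common realization of such moment features is OpenCV's Hu moment set, which is invariant to translation, scale, and rotation; its use here is an assumption, since the text only requires region moments computed from all points of the region:

    import cv2
    import numpy as np

    def region_moments(mask: np.ndarray) -> np.ndarray:
        """Seven Hu invariant moments of a boolean region mask."""
        m = cv2.moments(mask.astype(np.uint8), binaryImage=True)
        return cv2.HuMoments(m).flatten()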
(7) Extract the outline, centroid, area, minimum enclosing rectangle (MER), and invariant moments of the object; these key features are stored in the user-defined structure.
(8) If a new moving vehicle appears, repeat steps (5)–(7).
An improved Gaussian mixture model is used to characterize each pixel of the image frame, using only the brightness feature. If no moving target (vehicle) is present, the video image is relatively static and the variation of each pixel over time obeys a fixed statistical model; in this algorithm each pixel is represented by a mixture of K Gaussian distributions. When a new image frame is obtained, the mixture model is updated: if a pixel of the current image matches the mixture model it is judged to be a background point, otherwise a foreground point. The established background model and the current image are differenced by absolute value; the absolute difference prevents pixel overflow and the white spots it produces in the image and gives better noise control. After processing, an accurate vehicle outline and the parameters needed for tracking are obtained, which gives good results in vehicle counting and speed detection.
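A minimal end-to-end sketch tying the pieces above together: the background image is taken as the mean of the highest-weight Gaussian of each pixel (an assumption; the text only requires an absolute difference against the background model), and the contour and feature routines sketched earlier are reused:

    import cv2
    import numpy as np

    def foreground_features(brightness, mu, weight):
        """Absolute background difference followed by per-contour feature extraction."""
        best = np.argmax(weight, axis=-1)                                 # dominant Gaussian per pixel
        background = np.take_along_axis(mu, best[..., None], axis=-1)[..., 0]
        diff = np.abs(brightness.astype(np.float32) - background)         # absolute value difference
        features = []
        for contour in vehicle_contours(diff):
            mask = np.zeros(brightness.shape, np.uint8)
            cv2.drawContours(mask, [contour], -1, 255, thickness=-1)      # filled region mask
            region = mask > 0
            features.append({
                "area": region_area(region),
                "centroid": region_centroid(region),
                "mer": target_length_width(contour),
                "hu_moments": region_moments(region),
            })
        return features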

Claims (5)

1. A feature collection method in traffic flow information video detection, the detection system comprising a camera and a signal processor, the video image sequence input by the camera being {1, 2, …, t, …}, i.e., in video frame t the value of pixel i, X_{i,t} = [R_{i,t}, G_{i,t}, B_{i,t}], is processed, the probability density function of the k-th Gaussian distribution being formula (1):
η_k(X_t, μ_k, Σ_k) = (2π)^{-n/2} |Σ_k|^{-1/2} exp( -(1/2)(X_t − μ_k)^T Σ_k^{-1} (X_t − μ_k) )    (1)
and the probability of the current pixel i being computed with formula (2):
P(X_{i,t}) = Σ_{k=1}^{K} ω_{k,t−1} · η_k(X_{i,t}, μ_{k,t−1}, Σ_{k,t−1})    (2)
the method comprising the following steps:
(1) acquiring the camera's video image, obtaining the R, G, B color-space image sequence, and denoising the image by median filtering;
(2) converting the color space of the processed image sequence from R, G, B to a brightness space, S (brightness) = (R + G + B) / 3;
(3) setting the parameters of the Gaussian mixture algorithm, the parameters comprising the global background threshold T, the learning rate α, the number of Gaussian distributions K, and the initial weight ω;
(4) reading the brightness values of the image, taking the brightness value of each pixel of the first frame as the mean of the mixture, setting the variance to a predetermined empirical value, and establishing a single-Gaussian background model;
(5) reading frame t and comparing each pixel with the existing k (k ≤ K) Gaussian models of that pixel, checking whether formula (3) holds:
|X_{i,t} − μ_k| < 2.5σ_k    (3);
(5.1) if a match is found, updating the parameters and weight of the k-th Gaussian model, the parameters comprising the mean and variance, see formulas (4), (5), (6):
μ_t = (1 − ρ)μ_{t−1} + ρX_t    (4)
σ_t² = (1 − ρ)σ_{t−1}² + ρ(X_t − μ_t)^T(X_t − μ_t)    (5)
ρ = α·η(X_t | μ_k, σ_k)    (6);
(5.2) if there is no match and k < K, adding a Gaussian model for frame t, the new Gaussian taking the value of X_{i,t} as its mean, with the variance and weight ω set to empirical values;
(5.3) if there is no match and k = K, replacing the Gaussian with the lowest weight among the K Gaussians by a new Gaussian, the new Gaussian taking the value of X_{i,t} as its mean, with the variance and weight ω set to empirical values;
(5.4) updating the weight ω with formula (7):
ω_{k,t} = (1 − α)ω_{k,t−1} + α·M_{k,t}    (7)
where ω_{k,t} is the current weight, α is the learning rate, ω_{k,t−1} is the corresponding weight of the previous frame, and M_{k,t} is the match indicator: M_{k,t} = 1 if matched, M_{k,t} = 0 if not;
(6) taking the absolute difference between the established background model and the current image, extracting the features of the moving target, and obtaining the vehicle outline and tracking parameters after processing.
2. The feature collection method in traffic flow information video detection according to claim 1, characterized in that in said step (6) extracting the features of the moving target comprises:
(6.1) computing the area of the moving region: letting the side length of a square pixel be h, the area S of region A is computed with formula (8):
S = Σ_{(x,y)∈A} h²    (8)
where the points (x, y) range over all points belonging to region A;
(6.2) computing the region center: the centroid is computed from all points in region A with formulas (9) and (10):
x̄ = (1/S) Σ_{(x,y)∈A} x    (9)
ȳ = (1/S) Σ_{(x,y)∈A} y    (10)
(6.3) computing the length and width of the moving target: applying the minimum enclosing rectangle MER of the object, the boundary of the object being rotated in increments of a preset angle; after each increment a horizontally placed rectangle MER is fitted to the boundary, and the maximum and minimum X and Y values of the rotated boundary points are recorded; when the area of the MER reaches its minimum, the size of that MER gives the length and width of the target.
3. The feature collection method in traffic flow information video detection according to claim 2, characterized in that in said step (6) extracting the features of the moving target further comprises:
(6.4) invariant moments: for the digital image function f(x, y), computed over all points belonging to the region, if f is piecewise continuous and nonzero at only a finite number of points in the XY plane, its moments of every order are judged to exist.
4. The feature collection method in traffic flow information video detection according to any one of claims 1 to 3, characterized in that in said step (6), before the features of the moving target are extracted, the moving image is binarized and then dilated and eroded; dilation fills small holes in the target region and merges into an object all background points in contact with it; erosion removes isolated foreground noise points and eliminates all boundary points of an object; the outline of the moving vehicle is obtained and stored in the outline attribute of a user-defined structure.
5. The feature collection method in traffic flow information video detection according to claim 4, characterized in that in said step (3) the global background threshold T is set to T = 0.7, the learning rate α generally lies in the range [0.001, 0.01], K generally lies in the range [3, 5], and the initial weight ω is set to ω = 0.05.
CNB2005100620043A 2005-12-14 2005-12-14 A Feature Acquisition Method in Video Detection of Traffic Flow Information Expired - Fee Related CN100502463C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100620043A CN100502463C (en) 2005-12-14 2005-12-14 A Feature Acquisition Method in Video Detection of Traffic Flow Information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005100620043A CN100502463C (en) 2005-12-14 2005-12-14 A Feature Acquisition Method in Video Detection of Traffic Flow Information

Publications (2)

Publication Number Publication Date
CN1984236A true CN1984236A (en) 2007-06-20
CN100502463C CN100502463C (en) 2009-06-17

Family

ID=38166433

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100620043A Expired - Fee Related CN100502463C (en) 2005-12-14 2005-12-14 A Feature Acquisition Method in Video Detection of Traffic Flow Information

Country Status (1)

Country Link
CN (1) CN100502463C (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955940B (en) * 2012-11-28 2015-12-23 山东电力集团公司济宁供电公司 A kind of transmission line of electricity object detecting system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1027700A4 (en) * 1997-11-03 2001-01-31 T Netix Inc Model adaptation system and method for speaker verification
JP4336865B2 (en) * 2001-03-13 2009-09-30 日本電気株式会社 Voice recognition device
CN100367294C (en) * 2005-06-23 2008-02-06 复旦大学 Method for Segmenting Human Skin Regions in Color Digital Images and Videos

Cited By (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282461B (en) * 2007-04-02 2010-06-02 财团法人工业技术研究院 image processing method
CN101431665B (en) * 2007-11-08 2010-09-15 财团法人工业技术研究院 Method and system for detecting and tracking object
CN101437113B (en) * 2007-11-14 2010-07-28 汉王科技股份有限公司 Apparatus and method for detecting self-adapting inner core density estimation movement
CN101448151B (en) * 2007-11-28 2011-08-17 汉王科技股份有限公司 Motion detecting device for estimating self-adapting inner core density and method therefor
CN101527838B (en) * 2008-03-04 2010-12-08 华为技术有限公司 Method and system for feedback object detection and tracking of video objects
CN101540103B (en) * 2008-03-17 2013-06-19 上海宝康电子控制工程有限公司 Method and system for traffic information acquisition and event processing
CN101303732B (en) * 2008-04-11 2011-06-22 西安交通大学 Moving target perception and warning method based on vehicle-mounted monocular camera
CN101635026B (en) * 2008-07-23 2012-05-23 中国科学院自动化研究所 A method for detection of abandoned objects without tracking process
CN101872279B (en) * 2009-04-23 2012-11-21 深圳富泰宏精密工业有限公司 Electronic device and method for adjusting position of display image thereof
CN101909145B (en) * 2009-06-05 2012-03-28 鸿富锦精密工业(深圳)有限公司 Image noise filtering system and method
CN101639983B (en) * 2009-08-21 2011-02-02 任雪梅 Multilane traffic volume detection method based on image information entropy
CN101799968A (en) * 2010-01-13 2010-08-11 任芳 Detection method and device for oil well intrusion based on video image intelligent analysis
CN102236968B (en) * 2010-05-05 2015-08-19 刘嘉 Intelligent remote monitoring system for transport vehicle
CN102236968A (en) * 2010-05-05 2011-11-09 刘嘉 Remote intelligent monitoring system for transport vehicle
CN101883209B (en) * 2010-05-31 2012-09-12 中山大学 Method for integrating background model and three-frame difference to detect video background
CN101883209A (en) * 2010-05-31 2010-11-10 中山大学 A method of video background detection based on the combination of background model and three-frame difference
CN101882311A (en) * 2010-06-08 2010-11-10 中国科学院自动化研究所 Acceleration Method of Background Modeling Based on CUDA Technology
CN101916447B (en) * 2010-07-29 2012-08-15 江苏大学 Robust motion target detecting and tracking image processing system
CN101916447A (en) * 2010-07-29 2010-12-15 江苏大学 A Robust Moving Object Detection and Tracking Image Processing System
CN102385705A (en) * 2010-09-02 2012-03-21 大猩猩科技股份有限公司 Abnormal behavior detection system and method using multi-feature automatic clustering method
CN102385705B (en) * 2010-09-02 2013-09-18 大猩猩科技股份有限公司 Abnormal behavior detection system and method using multi-feature automatic clustering method
CN101964113A (en) * 2010-10-02 2011-02-02 上海交通大学 Method for detecting moving target in illuminance abrupt variation scene
CN101980300A (en) * 2010-10-29 2011-02-23 杭州电子科技大学 A motion detection method based on 3G smart phone
CN102043950A (en) * 2010-12-30 2011-05-04 南京信息工程大学 Vehicle outline recognition method based on canny operator and marginal point statistic
CN102043950B (en) * 2010-12-30 2012-11-28 南京信息工程大学 Vehicle outline recognition method based on canny operator and marginal point statistic
CN102081802A (en) * 2011-01-26 2011-06-01 北京中星微电子有限公司 Method and device for detecting color card based on block matching
CN102521580A (en) * 2011-12-21 2012-06-27 华平信息技术(南昌)有限公司 Real-time target matching tracking method and system
US9697643B2 (en) 2012-01-17 2017-07-04 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US10699155B2 (en) 2012-01-17 2020-06-30 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US12260023B2 (en) 2012-01-17 2025-03-25 Ultrahaptics IP Two Limited Systems and methods for machine control
US9767345B2 (en) 2012-01-17 2017-09-19 Leap Motion, Inc. Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections
US9934580B2 (en) 2012-01-17 2018-04-03 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US12086327B2 (en) 2012-01-17 2024-09-10 Ultrahaptics IP Two Limited Differentiating a detected object from a background using a gaussian brightness falloff pattern
US11994377B2 (en) 2012-01-17 2024-05-28 Ultrahaptics IP Two Limited Systems and methods of locating a control object appendage in three dimensional (3D) space
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9741136B2 (en) 2012-01-17 2017-08-22 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US11782516B2 (en) 2012-01-17 2023-10-10 Ultrahaptics IP Two Limited Differentiating a detected object from a background using a gaussian brightness falloff pattern
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US11720180B2 (en) 2012-01-17 2023-08-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US9672441B2 (en) 2012-01-17 2017-06-06 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9652668B2 (en) 2012-01-17 2017-05-16 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US11308711B2 (en) 2012-01-17 2022-04-19 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9153028B2 (en) 2012-01-17 2015-10-06 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US10767982B2 (en) 2012-01-17 2020-09-08 Ultrahaptics IP Two Limited Systems and methods of locating a control object appendage in three dimensional (3D) space
US9778752B2 (en) 2012-01-17 2017-10-03 Leap Motion, Inc. Systems and methods for machine control
US9626591B2 (en) 2012-01-17 2017-04-18 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US9945660B2 (en) 2012-01-17 2018-04-17 Leap Motion, Inc. Systems and methods of locating a control object appendage in three dimensional (3D) space
US9436998B2 (en) 2012-01-17 2016-09-06 Leap Motion, Inc. Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections
US10565784B2 (en) 2012-01-17 2020-02-18 Ultrahaptics IP Two Limited Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space
US9495613B2 (en) 2012-01-17 2016-11-15 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging using formed difference images
US10410411B2 (en) 2012-01-17 2019-09-10 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US10366308B2 (en) 2012-01-17 2019-07-30 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
RU2506640C2 (en) * 2012-03-12 2014-02-10 Государственное казенное образовательное учреждение высшего профессионального образования Академия Федеральной службы охраны Российской Федерации (Академия ФСО России) Method of identifying insert frames in multimedia data stream
CN102693637A (en) * 2012-06-12 2012-09-26 北京联合大学 Signal lamp for prompting right-turn vehicle to avoid pedestrians at crossroad
CN102693637B (en) * 2012-06-12 2014-09-03 北京联合大学 Signal lamp for prompting right-turn vehicle to avoid pedestrians at crossroad
CN102799857A (en) * 2012-06-19 2012-11-28 东南大学 Video multi-vehicle outline detection method
CN102799857B (en) * 2012-06-19 2014-12-17 东南大学 Video multi-vehicle outline detection method
CN102867193B (en) * 2012-09-14 2015-06-17 成都国科海博信息技术股份有限公司 Biological detection method and device and biological detector
CN102867193A (en) * 2012-09-14 2013-01-09 成都国科海博计算机系统有限公司 Biological detection method and device and biological detector
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
US10097754B2 (en) 2013-01-08 2018-10-09 Leap Motion, Inc. Power consumption in motion-capture systems with audio and optical signals
US11874970B2 (en) 2013-01-15 2024-01-16 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US12405673B2 (en) 2013-01-15 2025-09-02 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US11353962B2 (en) 2013-01-15 2022-06-07 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US10739862B2 (en) 2013-01-15 2020-08-11 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US11740705B2 (en) 2013-01-15 2023-08-29 Ultrahaptics IP Two Limited Method and system for controlling a machine according to a characteristic of a control object
US12204695B2 (en) 2013-01-15 2025-01-21 Ultrahaptics IP Two Limited Dynamic, free-space user interactions for machine control
CN103150738A (en) * 2013-02-02 2013-06-12 南京理工大学 Detection method of moving objects of distributed multisensor
US11693115B2 (en) 2013-03-15 2023-07-04 Ultrahaptics IP Two Limited Determining positional information of an object in space
US12306301B2 (en) 2013-03-15 2025-05-20 Ultrahaptics IP Two Limited Determining positional information of an object in space
US10585193B2 (en) 2013-03-15 2020-03-10 Ultrahaptics IP Two Limited Determining positional information of an object in space
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US10452151B2 (en) 2013-04-26 2019-10-22 Ultrahaptics IP Two Limited Non-tactile interface systems and methods
US12333081B2 (en) 2013-04-26 2025-06-17 Ultrahaptics IP Two Limited Interacting with a machine using gestures in first and second user-specific virtual planes
US11099653B2 (en) 2013-04-26 2021-08-24 Ultrahaptics IP Two Limited Machine responsiveness to dynamic user movements and gestures
CN103272783A (en) * 2013-06-21 2013-09-04 核工业理化工程研究院华核新技术开发公司 Color determination and separation method for color CCD color sorting machine
CN103272783B (en) * 2013-06-21 2015-11-04 核工业理化工程研究院华核新技术开发公司 Color Judgment and Separation Method of Color CCD Color Sorter
US11461966B1 (en) 2013-08-29 2022-10-04 Ultrahaptics IP Two Limited Determining spans and span lengths of a control object in a free space gesture control environment
US11776208B2 (en) 2013-08-29 2023-10-03 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12236528B2 (en) 2013-08-29 2025-02-25 Ultrahaptics IP Two Limited Determining spans and span lengths of a control object in a free space gesture control environment
US12086935B2 (en) 2013-08-29 2024-09-10 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11282273B2 (en) 2013-08-29 2022-03-22 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12242312B2 (en) 2013-10-03 2025-03-04 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11775033B2 (en) 2013-10-03 2023-10-03 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11868687B2 (en) 2013-10-31 2024-01-09 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12265761B2 (en) 2013-10-31 2025-04-01 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
CN103646544A (en) * 2013-11-15 2014-03-19 天津天地伟业数码科技有限公司 Vehicle-behavior analysis and identification method based on holder and camera device
CN103646544B (en) * 2013-11-15 2016-03-09 天津天地伟业数码科技有限公司 Based on the vehicle behavioural analysis recognition methods of The Cloud Terrace and camera apparatus
CN103578121B (en) * 2013-11-22 2016-08-17 南京信大气象装备有限公司 Method for testing motion based on shared Gauss model under disturbed motion environment
CN103578121A (en) * 2013-11-22 2014-02-12 南京信大气象装备有限公司 Motion detection method based on shared Gaussian model in disturbed motion environment
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US12314478B2 (en) 2014-05-14 2025-05-27 Ultrahaptics IP Two Limited Systems and methods of tracking moving hands and recognizing gestural interactions
US12154238B2 (en) 2014-05-20 2024-11-26 Ultrahaptics IP Two Limited Wearable augmented reality devices with object detection and tracking
CN104036288A (en) * 2014-05-30 2014-09-10 宁波海视智能系统有限公司 Vehicle type classification method based on videos
US12095969B2 (en) 2014-08-08 2024-09-17 Ultrahaptics IP Two Limited Augmented reality with motion sensing
US11778159B2 (en) 2014-08-08 2023-10-03 Ultrahaptics IP Two Limited Augmented reality with motion sensing
CN105472204A (en) * 2014-09-05 2016-04-06 南京理工大学 Inter-frame noise reduction method based on motion detection
CN105472204B (en) * 2014-09-05 2018-12-14 南京理工大学 Inter-frame Noise Reduction Method Based on Motion Detection
CN104267209B (en) * 2014-10-24 2017-01-11 浙江力石科技股份有限公司 Method and system for expressway video speed measurement based on virtual coils
CN104267209A (en) * 2014-10-24 2015-01-07 浙江力石科技股份有限公司 Method and system for expressway video speed measurement based on virtual coils
US12299207B2 (en) 2015-01-16 2025-05-13 Ultrahaptics IP Two Limited Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
CN104950285A (en) * 2015-06-02 2015-09-30 西安理工大学 RFID (radio frequency identification) indoor positioning method based on signal difference value change of neighboring tags
CN104950285B (en) * 2015-06-02 2017-08-25 西安理工大学 A kind of RFID indoor orientation methods changed based on neighbour's label signal difference
CN106412501B (en) * 2016-09-20 2019-07-23 华中科技大学 A kind of the construction safety behavior intelligent monitor system and its monitoring method of video
CN106412501A (en) * 2016-09-20 2017-02-15 华中科技大学 Construction safety behavior intelligent monitoring system based on video and monitoring method thereof
CN109146914A (en) * 2018-06-20 2019-01-04 上海市政工程设计研究总院(集团)有限公司 A kind of drink-driving behavior method for early warning of the highway based on video analysis
CN109146914B (en) * 2018-06-20 2023-05-30 上海市政工程设计研究总院(集团)有限公司 Drunk driving behavior early warning method for expressway based on video analysis
CN109035205A (en) * 2018-06-27 2018-12-18 清华大学苏州汽车研究院(吴江) Water hyacinth contamination detection method based on video analysis
CN108694833A (en) * 2018-07-17 2018-10-23 重庆交通大学 Traffic Abnormal Event Detection System Based on Binary Sensor
US11986698B2 (en) 2019-02-22 2024-05-21 Trackman A/S System and method for driving range shot travel path characteristics
CN113168704A (en) * 2019-02-22 2021-07-23 轨迹人有限责任公司 System and method for driving range batting path characterization
CN113286194A (en) * 2020-02-20 2021-08-20 北京三星通信技术研究有限公司 Video processing method and device, electronic equipment and readable storage medium
CN116050963A (en) * 2022-12-27 2023-05-02 天翼物联科技有限公司 Distribution path selection method, system, device and medium based on traffic road conditions

Also Published As

Publication number Publication date
CN100502463C (en) 2009-06-17

Similar Documents

Publication Publication Date Title
CN100502463C (en) A Feature Acquisition Method in Video Detection of Traffic Flow Information
CN102768804B (en) Video-based traffic information acquisition method
CN104658011B (en) A kind of intelligent transportation moving object detection tracking
CN106204572B (en) Depth estimation method of road target based on scene depth mapping
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN110008932A (en) A kind of vehicle violation crimping detection method based on computer vision
CN108983219A (en) A kind of image information of traffic scene and the fusion method and system of radar information
CN104463877B (en) A kind of water front method for registering based on radar image Yu electronic chart information
CN106845478A (en) The secondary licence plate recognition method and device of a kind of character confidence level
CN110210451B (en) A zebra crossing detection method
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
CN104183127A (en) Traffic surveillance video detection method and device
CN103116757B (en) A kind of three-dimensional information restores the road extracted and spills thing recognition methods
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN102073851A (en) Method and system for automatically identifying urban traffic accident
CN102289948A (en) Multi-characteristic fusion multi-vehicle video tracking method under highway scene
Zhang et al. End to end video segmentation for driving: Lane detection for autonomous car
CN105868734A (en) Power transmission line large-scale construction vehicle recognition method based on BOW image representation model
CN103942560A (en) High-resolution video vehicle detection method in intelligent traffic monitoring system
CN104881645A (en) Vehicle front target detection method based on characteristic-point mutual information content and optical flow method
CN103093198A (en) Crowd density monitoring method and device
CN103425764A (en) Vehicle matching method based on videos
CN109359549A (en) A Pedestrian Detection Method Based on Mixed Gaussian and HOG_LBP
CN100382600C (en) Moving Object Detection Method in Dynamic Scene
CN105574895A (en) Congestion detection method during the dynamic driving process of vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090617