
WO2012141663A1 - A method for individual tracking of multiple objects - Google Patents


Info

Publication number
WO2012141663A1
Authority
WO
WIPO (PCT)
Prior art keywords
objects
living
nonliving
tracking
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/TR2011/000082
Other languages
French (fr)
Inventor
Alptekin Temizel
Cigdem BEYAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to PCT/TR2011/000082 priority Critical patent/WO2012141663A1/en
Publication of WO2012141663A1 publication Critical patent/WO2012141663A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance

Definitions

  • k is the kernel function, which gives more weight to the pixels at the centre of the model
  • C is a normalizing constant which ensures that the sum of the histogram elements is 1
  • u represents the histogram bin
  • n is the number of pixels in the object model.
  • δ is the Kronecker delta function and b represents the histogram binning function for the pixels at location x_i
  • a candidate model is constructed. Similar to the target model's pdf, the candidate model's pdf at location y is given by p_u(y) = C_h Σ_{i=1..n_h} k(‖(y − x_i)/h‖²) δ[b(x_i) − u]   (12), where C_h is a normalizing constant and h is the kernel size, which determines the size of the candidate objects.
  • Object detection is the first step for object tracking and this could be either manual or automatic.
  • many studies assume that the objects to be tracked are selected manually by an operator. However, with manual initialization, new objects entering the scene after the initialization frame cannot be tracked unless the operator regularly selects all new objects, which prevents the system from being fully automatic.
  • with automatic initialization, any new object entering the scene can be tracked without any need for a human operator.
  • a fully automatic system is proposed: the results of the object discrimination step are used to initialize the objects' bounding boxes, and a tracker is defined for each living or nonliving object. Additionally, these bounding boxes are used as a mask to decrease the search area of the mean shift tracker. This increases the proposed system's tracking accuracy and performance. Because it reduces the search area within the frame, the number of iterations required to find the new position of the object model is decreased.
  • To handle changes in size or shape, we update the trackers every 25 frames. To detect new objects as well as objects that leave the scene, the numbers of objects in adjacent frames are compared, and if those numbers are not equal the trackers are updated. To handle occlusions and splits and to detect newly emerging objects, the locations of the bounding boxes of the objects are compared. If an intersection exists, the trackers are refreshed to handle the inclusion of the front object's color. Separate trackers are initialized for each living and nonliving object, and the same algorithm is used independent of the object type.
  • Closeness is defined as the distance between the centres of mass of the two objects (o_i and o_p), and the Euclidean formula (Eq. 14) is used to calculate this distance. Similarity, on the other hand, is calculated using the size ratio of the objects (Eq. 15). If the distance between object o_i and object o_p is smaller than a distance threshold and the size ratio of objects o_i and o_p is smaller than a size threshold, we define object o_i as the corresponding object for o_p.
  • closeness is a successful criterion since the displacement of an object between adjacent frames should be small. However, it is not a sufficient criterion, since objects that are close to each other in the previous frame may interfere and the matching might be incorrect. Therefore, a similarity criterion is also required. The similarity criterion is useful, in turn, because objects do not scale too much between consecutive frames.
  • O t could be a new object or it could have been occluded by another object.
  • the color histograms of the two objects are stored in order to compare them when a split occurs, and a tracker is created to follow the merged (occluding) object.
  • each nonliving object is indexed with the notation (Owner Index.Object Index For The Owner).
  • the number before the dot shows the index of the nonliving object's owner, and the number after the dot shows the index of the nonliving object.
  • 1.1 is the object belonging to person 1
  • 2.1 is the object belonging to person 2.
  • Abandoned object detection is the main aim of this invention. Integrating abandoned object detection into the tracking system allows the person who leaves luggage unattended to be tracked and detected. This method successfully identifies the owner of an abandoned object if the person who leaves the luggage stays near it until it is detected as abandoned. However, when the luggage is left and its owner exits the field of view, it would not be possible to find the owner without some extensions to the system. Therefore, association of living and nonliving objects is essential, as it allows finding the owner of unattended luggage.
  • a nonliving object is detected as abandoned when its owner leaves the field of view, and the alarm is set off after N frames have passed. The alarm is removed immediately when the nonliving object is removed. To prevent false alarms when objects merge, the object's owner is checked as to whether it has been occluded and formed a new object (Figure 9).
  • Figure 9 and its frames show Person 3 leaving her backpack (object 3.1) on the floor. After it is detected as an abandoned item, temporary occlusions caused by moving persons 5 and 6 do not cause the system to fail. The alarm is raised (Frame #556) after the person owning the backpack leaves.
  • To track living and nonliving objects and to detect abandoned nonliving objects, the images captured from the thermal and visible band cameras are first registered. Both cameras should be adjusted to capture a similar field of view. However, it is not practically possible to capture exactly the same field of view (FOV) with both thermal and visible band cameras, since these cameras have different parameters (such as different sensor types and lenses). Therefore, a crop operation is performed on both the thermal and visible band frames to set almost the same FOV for the thermal and visible images. Then, the homography is obtained manually by selecting reference points in both the thermal and visible domains for image registration. To find the corresponding pixel for each pixel, a homography matrix is constructed. To obtain the homography matrix, Eqs. (16) and (17) and the reference points selected from both the thermal and visible images are used.
  • H = V_ref × T_ref^(-1)   (17)
  • V_ref is the reference point matrix for the visible domain
  • T_ref is the reference point matrix for the thermal domain
  • H is the homography matrix for registration. The more reference points are selected, the better the registration results that can be obtained. In this invention, 20 reference points are selected for each dataset. Once the capture and homography parameters are obtained, they can be used unchanged as long as the camera positions are not changed.
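As a sketch of the registration step above, a homography can be estimated from manually selected reference-point pairs with the standard direct linear transform (DLT). This is a generic least-squares formulation under assumed inputs, not a reproduction of the patent's own matrix equations (16) and (17):

```python
import numpy as np

def estimate_homography(thermal_pts, visible_pts):
    """Estimate the homography H mapping thermal points to visible points.

    thermal_pts, visible_pts: (N, 2) arrays of matching reference points,
    N >= 4. Standard DLT sketch: stack two linear constraints per point
    pair and take the null-space vector of the stacked system.
    """
    rows = []
    for (x, y), (u, v) in zip(thermal_pts, visible_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=np.float64)
    # Least-squares null space via SVD: the right singular vector with
    # the smallest singular value is the flattened homography.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, x, y):
    """Map a thermal pixel into visible-band coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With 20 reference points, as used in the invention, the overdetermined system is solved in the same least-squares sense.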


Abstract

The present invention relates to a tracking method capable of individually tracking multiple objects (such as people and their belongings); an abandoned object detection method is also proposed. Multiple objects are tracked using the proposed method. In addition to the visible band, thermal images are also used, and these two modalities are fused to track people and the objects they carry separately using their heat signatures. By using the information coming from the different modalities, trajectories of the objects are found, ownership information for nonliving objects is determined and abandoned objects are detected. Better tracking performance is also achieved compared to using a single modality. We use adaptive background modeling and local intensity operation in association with mean-shift tracking for fully automatic tracking. Trackers are refreshed to resolve possible problems that may occur due to changes in an object's size or shape, to handle occlusions and splits, and to detect newly emerging objects as well as objects that leave the scene.

Description

A METHOD FOR INDIVIDUAL TRACKING OF MULTIPLE OBJECTS
Technical Field
The present invention relates to a tracking method capable of individually tracking multiple objects (such as people and their belongings); an abandoned object detection method based on this individual tracking is also proposed.
Prior Art
Tracking of objects and detection of packages left unattended in public spaces such as shopping malls and airports is a security concern and is important for preventing possible threats. To ensure the security of an environment, surveillance operators generally watch a high number of cameras simultaneously. However, this is a challenging and labor-intensive task. Additionally, these systems are left unattended at certain times, which results in security lapses and may lead to catastrophic events. Therefore, it is crucial to have automated detection systems in place that aid operators in detecting suspicious behavior and unattended suspicious items on time.
In recent years, several studies have been conducted to detect abandoned items automatically using computer-assisted systems. In such systems, the most important issue is attaining low false alarm rates while not missing real alarms, as false alarms might render the system ineffective by causing operators to ignore them. Hence, it is important to prevent false alarms due to stationary living objects and the normal behavior of people.
The patent application numbered US2004120581A1 can be cited as an alternative method for eliminating false alarms. It uses a motion pattern database to analyze motions within the video.
CA2640931A1, RU2368952, US2004120581A1, US2009010493A1, US2010266159A1 and WO2008078112A1 are related patent applications, but they are not considered to be of particular relevance to this invention.
Brief Description of the Invention
Multiple objects are tracked using the proposed method. In addition to the visible band, thermal images are also used, and these two modalities are fused to track people and the objects they carry separately using their heat signatures. By using the information coming from the different modalities, trajectories of the objects are found, the owner of each nonliving object is determined and abandoned objects are detected. Tracking performance is also improved compared to using a single modality, as the different modalities provide distinctive information. We use adaptive background modeling and local intensity operation in association with mean-shift tracking for fully automatic tracking. Trackers are refreshed to resolve possible problems that may occur due to changes in an object's size or shape, to handle occlusions and splits, and to detect newly emerging objects as well as objects that leave the scene.
Detailed Description of the Invention
In order to attain the objects of the invention, the tracking method is illustrated in the attached figures, wherein;
Figure 1 - is a block diagram of the proposed tracking method's object discrimination and object tracking parts.
Figure 2a - is a visible band image.
Figure 2b - is the result of background subtraction.
Figure 2c - is the result of noise removal.
Figure 3a - is an unprocessed thermal image.
Figure 3b - is the segmentation result for the image of 3a.
Figure 4a - is an example of a visible band image.
Figure 4b - is the thermal form of image 4a.
Figure 4c - is the object discrimination result for image 4a, with errors.
Figure 4d - is the segmented living object of image 4a.
Figure 4e - is the segmented nonliving object of image 4a.
Figure 5 - is the improved, adaptive mean shift tracking method.
Figure 6 (n-1) - is the non-occlusion detected form of the analyzed image.
Figure 6 (n) - is the occlusion detected form of the analyzed image.
Figure 7 - is the correspondence based object matching after re-initialization of trackers.
Figure 8 Frame # 76 - is an example of association of objects with their owners.
Figure 8 Frame # 82 - is an example of association of objects with their owners.
Figure 8 Frame # 95 - is an example of association of objects with their owners.
Figure 8 Frame # 106 - is an example of association of objects with their owners.
Figure 8 Frame # 109 - is an example of association of objects with their owners.
Figure 8 Frame # 125 - is an example of association of objects with their owners.
Figure 9 Frame # 168 - is an example of detection of an abandoned item.
Figure 9 Frame # 199 - is an example of detection of an abandoned item.
Figure 9 Frame # 334 - is an example of detection of an abandoned item.
Figure 9 Frame # 415 - is an example of detection of an abandoned item.
Figure 9 Frame # 427 - is an example of detection of an abandoned item.
Figure 9 Frame # 475 - is an example of detection of an abandoned item.
Figure 9 Frame # 503 - is an example of detection of an abandoned item.
Figure 9 Frame # 531 - is an example of detection of an abandoned item.
Figure 9 Frame # 544 - is an example of detection of an abandoned item.
Figure 9 Frame # 556 - is an example of detection of an abandoned item.
Figure 9 Frame # 562 - is an example of detection of an abandoned item.
Figure 9 Frame # 589 - is an example of detection of an abandoned item.
The inventive tracking method comprises two main image-analyzing groups of steps.
The aim of the first group of steps is discrimination of living and nonliving objects. The second group of steps is related to object tracking. Discrimination starts with background subtraction (101), which is applied to the visible band image. Then, connected component analysis is used to remove noise (102). In parallel, the local intensity operation is applied to the thermal image (103) and the result of this operation is post-processed (104) to complete and close possible holes which might form after the local intensity operation.
The second group's object discrimination (105) step is the fusion step, which uses both modalities. After this step, a rule-based method and connected component analysis are used to extract objects and classify them as living or nonliving. Finally, each object (living and/or nonliving) is tracked using our improved, adaptive mean shift tracking algorithm (106). While tracking objects, living and nonliving objects are also associated with each other and an owner/carried-object relation is set for the tracked objects (107). Abandonment of an object is detected by using these relations and tracking the objects separately (108). These steps are described below in detail.
The Improved Adaptive Gaussian mixture model (Z. Zivkovic, "Improved adaptive Gaussian mixture model for background subtraction", Proceedings of the 17th International Conference on Pattern Recognition (ICPR), vol. 2, pp. 28-31, 23-26 Aug. 2004) is a technique that produces reliable background information without being computationally complex. It is a pixel-based method in which each pixel is modeled as a mixture of Gaussians with M components as follows:
p(x^(t) | X_T, BG + FG) = Σ_{m=1}^{M} π_m · N(x^(t); μ_m, σ_m^2·I)   (1)
where x^(t) is the value of the pixel at time t, X_T = {x^(t), ..., x^(t−T)} is the training set at time t while T is the time period, BG is the background, FG is the foreground, and μ_1, μ_2, ..., μ_M and σ_1^2, σ_2^2, ..., σ_M^2 are the estimates of the means and variances of the Gaussian components, respectively. π_1, π_2, ..., π_M are the weight values, which are nonnegative and sum to 1. The parameters of the model should be updated with new samples to adapt to changes in the background. Equations (2), (3) and (4) show how the Gaussian model parameters are updated.
π_m ← π_m + α·(o_m^(t) − π_m)   (2)
μ_m ← μ_m + o_m^(t)·(α / π_m)·δ_m   (3)
σ_m^2 ← σ_m^2 + o_m^(t)·(α / π_m)·(δ_m^T·δ_m − σ_m^2)   (4)
where δ_m = x^(t) − μ_m, o_m^(t) is the ownership, and α is the learning parameter, approximately α = 1/T, where T is the time period. For each new sample, the ownership o_m^(t) is set to one for the "close" component with the largest π_m and to zero for the others. A new sample is "close" to a component if the Mahalanobis distance between them is less than four standard deviations. The squared distance from the m-th component can be calculated using Eq. (5):
D_m^2(x^(t)) = δ_m^T·δ_m / σ_m^2   (5)
If the new sample is close to the component, the new sample lies within the 99% confidence level and can be determined to be part of the foreground.
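The update rules above can be sketched for a single one-dimensional pixel value as follows. This is a minimal illustrative reading of Eqs. (2)-(5); the learning rate, component count and small-weight floor are assumptions, not the patent's implementation:

```python
import numpy as np

def update_gmm_pixel(x, pi, mu, var, alpha=0.005):
    """One adaptive-GMM update step for a single pixel value x.

    pi, mu, var are length-M arrays: component weights, means, variances.
    Returns the updated parameters and whether x matched a component.
    """
    # Squared Mahalanobis distance of x to each component (Eq. 5).
    d2 = (x - mu) ** 2 / var
    close = d2 < 4.0 ** 2            # within four standard deviations

    # Ownership o_m: 1 for the "close" component with the largest weight.
    o = np.zeros_like(pi)
    if close.any():
        candidates = np.where(close, pi, -np.inf)
        o[np.argmax(candidates)] = 1.0

    # Eqs. (2)-(4): weight, mean and variance updates.
    pi = pi + alpha * (o - pi)
    pi /= pi.sum()                   # keep the weights summing to 1
    rate = o * (alpha / np.maximum(pi, 1e-6))
    delta = x - mu
    mu = mu + rate * delta
    var = var + rate * (delta ** 2 - var)
    return pi, mu, var, bool(close.any())
```

Feeding a stable pixel value repeatedly makes the matching component's weight grow, which is how the background model adapts.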
Firstly, background subtraction is applied to extract a stationary background image. Any background subtraction algorithm can be used in this stage, as long as the stationary background image allows discrimination of foreground objects.
After background subtraction, we apply connected component analysis to detect and remove noise. To eliminate the noise, the number of pixels in each connected component and the area of its bounding rectangle are found. Then, the density of each object is calculated using Eq. 6.
D = N / A_rect   (6)
where D is the density of the object, N is the number of pixels the object has and A_rect is the area of its bounding rectangle.
After finding the densities of the objects, each connected component is classified as noise and removed from the image if its density is smaller than the density threshold and the number of pixels belonging to it is smaller than the maximum pixel-count threshold. An example result of the background subtraction and noise removal steps is shown in Figure 2.
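The density-based noise removal of Eq. (6) can be sketched as follows. The threshold values and the 8-connectivity flood fill are illustrative assumptions; the patent does not specify them:

```python
import numpy as np
from collections import deque

def remove_noise(mask, density_thr=0.4, min_pixels=20):
    """Drop connected components classified as noise (Eq. 6).

    A component is removed when its density D = N / A_rect is below
    density_thr AND its pixel count N is below min_pixels.
    """
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    out = mask.copy()
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                # BFS flood fill (8-connectivity) to collect the component.
                comp = []
                q = deque([(sy, sx)])
                labels[sy, sx] = current
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = current
                                q.append((ny, nx))
                ys, xs = zip(*comp)
                n = len(comp)
                a_rect = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
                if n / a_rect < density_thr and n < min_pixels:
                    for y, x in comp:
                        out[y, x] = False
    return out
```

Dense blobs (people, luggage) survive, while sparse speckle components fail both tests and are erased.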
To discriminate people from their belongings, heat signature information is used. Since thermal-domain images are constructed from the energy emitted by objects, and living objects emit more energy than nonliving objects, pixels of living objects appear brighter than pixels of nonliving objects (in the white-hot setting). The invention uses the local intensity operation (LIO), which was proposed for defect detection in thermal images (R. Heriansyah and S.A.R. Abu-Bakar, "Defect detection in thermal image for nondestructive evaluation of petrochemical equipments", NDT & E International, Vol. 42, Issue 8, pp. 729-740, Dec. 2009). We utilize this operator, which brightens the bright pixels and darkens the dark pixels, in a similar fashion to segment pixels belonging to living objects.
According to this method, a pixel I(x,y) in the thermal image is written as z_0, and its neighbors I(x−1,y−1), I(x−1,y), I(x−1,y+1), I(x,y−1), I(x,y+1), I(x+1,y−1), I(x+1,y), I(x+1,y+1) are written as z_1, z_2, z_3, z_4, z_5, z_6, z_7, z_8, respectively. Then, Z is the product of the neighboring pixels:
Z = z_1 · z_2 · ... · z_8   (7)
A new image is created according to Z for each pixel in the thermal image by defining the intensity brightness operation using Eq. (8):
g(x, y) = Z   (8)
where g(x, y) is the pixel value at (x, y) of the new image.
After that, the image pixels are normalized to the gray-scale range. The normalization is done by dividing the pixels by the maximum pixel value within the image.
This operation increases the brightness of the bright pixels and the darkness of the dark pixels to obtain a better result; the new image is then segmented using Mean Absolute Thresholding (MAT). MAT is a simple segmentation mechanism that uses a threshold value calculated as follows:
T = round((I_max + I_min) / 2)   (9)
where T is the threshold value, I_max is the maximum pixel value and I_min is the minimum pixel value.
Example results of the algorithm are shown in Figure 3.
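The LIO and MAT steps can be sketched together as follows. The midpoint form of the MAT threshold and the edge-padding at the image border are assumptions (the patent's Eq. (9) is only partially legible in this copy):

```python
import numpy as np

def lio_segment(thermal):
    """Local intensity operation followed by Mean Absolute Thresholding.

    Z is the product of the 8 neighbours of each pixel (Eqs. 7-8); the
    result is normalised by its maximum and thresholded at the midpoint
    of the intensity range.
    """
    img = thermal.astype(np.float64)
    h, w = img.shape
    g = np.ones((h, w))
    # Multiply the 8 neighbours of every pixel (borders via edge padding).
    padded = np.pad(img, 1, mode='edge')
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            g *= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    g /= g.max()                      # normalise by the maximum pixel value
    t = (g.max() + g.min()) / 2.0     # assumed MAT threshold (Eq. 9)
    return g > t
```

Because the product of eight bright neighbours stays large while any dark neighbour collapses it towards zero, warm (living) regions survive the threshold and cool regions are suppressed.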
It has to be noted that, besides living objects, hot objects such as heating systems, radiators or any nonliving objects which are hotter than the environment also appear brighter than other objects and are segmented as a result of this process. However, as will be explained later, since such objects belong to the background in the visible image, our method does not classify them as people, and hence false alarms due to stationary hot objects are prevented.
The algorithm above may not delineate the object precisely, and gaps may appear on the object's body due to clothing. These problems are rectified with post-processing. To make each object a single piece, the holes in the binary images must be completed and closed using morphological operations. First, objects in the binary images (the result of the local intensity operation, Figure 3b) are completed by hole-filling. Then, these binary objects are closed. The object discrimination step is the fusion step in which both thermal and visible band images are used. It is the main step for individual tracking of objects such as people and their belongings.
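The hole-filling and closing steps can be sketched with plain NumPy as below; the 3×3 structuring element and the 4-connected flood fill are our assumptions (a library such as SciPy's `ndimage` offers equivalent routines):

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(mask):
    """Binary erosion with a 3x3 structuring element (outside = background)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def fill_holes(mask):
    """Turn into foreground every background pixel that cannot be
    reached from the image border by a 4-connected flood fill."""
    h, w = mask.shape
    reach = np.zeros_like(mask)
    stack = [(y, x) for y in range(h) for x in range(w)
             if (y in (0, h - 1) or x in (0, w - 1)) and not mask[y, x]]
    for y, x in stack:
        reach[y, x] = True
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] and not reach[ny, nx]:
                reach[ny, nx] = True
                stack.append((ny, nx))
    return mask | ~reach

def postprocess(mask):
    """Hole-filling followed by morphological closing, as in the text."""
    return erode(dilate(fill_holes(mask)))
```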
In this step, the objects resulting from background subtraction and noise removal (a binary image) in the visible data and the objects resulting from the local intensity operation and post-processing (a binary image) in the thermal data are utilized.

Using the rule given in Eq. (10) and connected component analysis, objects are extracted and classified as living (people) or nonliving (belongings).
F(x, y) = living object (people), if Rv(x, y) ≠ 0 and RT(x, y) ≠ 0
F(x, y) = nonliving object (belonging), if Rv(x, y) ≠ 0 and RT(x, y) = 0 (10)

where F(x, y) is the fusion result, Rv(x, y) is the pixel value after background subtraction and noise removal in the visible data and RT(x, y) is the pixel value after local intensity operation and post-processing in the thermal domain. By using this rule, thermal reflections and hot objects such as heating systems and radiators are not classified as living objects, and possible false alarms are prevented.
After this operation some discrimination errors may occur, especially around living objects, due to inaccuracies in the registration of the thermal and visible band images. To handle these errors, the same noise-removal method presented above is applied and the errors are eliminated. Example results of this step are shown in Figure 4.
The mean shift tracking method is an optimization algorithm based on object representation (D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking", IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, pp. 564-577, May 2003). It is an iterative scheme that uses a nonparametric kernel and executes until its goal is attained. It essentially tries to find, in the next image frame, the object that is most similar and closest to the initialized object (object model) in the current frame. It compares the histogram of the object model with the histogram of the candidate object in the next frame; the aim is to maximize the similarity between the two histograms.
At the initialization step, the object model to be tracked is selected, and the bin size, kernel function, kernel size and maximum iteration number are determined. The color histogram of the object model is found, and the probability density function (pdf) of the object model is calculated as follows:

q_u = C Σ_{i=1..n} k(‖x_i‖²) δ[b(x_i) − u] (11)

In this equation, k is the kernel function, which gives more weight to pixels at the centre of the model, C is a normalizing constant ensuring that the histogram elements sum to 1, u denotes the histogram bin and n is the number of pixels in the object model. δ is the Kronecker delta function and b is the histogram binning function for the pixel at location x_i.
After defining the target model in the initialization step, a candidate model is constructed. Similar to the target model's pdf, the candidate model's pdf at location y is given by

p_u(y) = C_h Σ_{i=1..n_h} k(‖(y − x_i)/h‖²) δ[b(x_i) − u] (12)

where h is the kernel size, which determines the size of the candidate objects, n_h is the number of pixels in the candidate region and C_h is the corresponding normalizing constant.
After defining the pdf of the candidate model, it is compared with the target model's pdf. To compare color-based pdfs, the metric derived from the Bhattacharyya coefficient is generally used. In Eq. (13), ρ is the Bhattacharyya coefficient and p_u(y), q_u are the candidate and target pdf values for bin u:

ρ(y) = ρ[p(y), q] = Σ_u √(p_u(y) q_u) (13)
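The similarity measure of Eq. (13) is a one-liner over normalized histograms; a small sketch (function name ours):

```python
import numpy as np

def bhattacharyya(p, q):
    """Eq. (13): rho = sum over bins of sqrt(p_u * q_u). Equals 1.0 for
    identical normalized histograms and 0.0 for disjoint ones."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))
```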
The larger ρ is, the more similar the pdfs are. If the candidate model is not similar to the target model, the current search area is shifted. The iteration continues until the improvement in similarity falls below a threshold or the iteration count reaches the predefined maximum. By applying this method to each video frame, the object model can be tracked over time.
Object detection is the first step of object tracking, and it can be either manual or automatic. In the literature, many studies assume that the objects to be tracked are selected manually by an operator. However, with manual initialization, new objects entering the scene after the initialization frame cannot be tracked unless the operator regularly selects them, which prevents the system from being automatic. On the other hand, with automatic initialization, any new object entering the scene can be tracked without a human operator. In this invention a fully automatic system is proposed: the results of the object discrimination step are used to initialize the objects' bounding boxes, and a tracker is defined for each living or nonliving object. Additionally, these bounding boxes are used as a mask to reduce the search area of the mean shift tracker. This increases the proposed system's tracking accuracy and performance: because the search area within the frame is reduced, the number of iterations required to find the new position of the object model is decreased.
Even though tracking objects using only the result of object discrimination seems possible, it is not a robust method: when multiple objects must be tracked in crowded places and in the presence of occlusions, matching objects and finding correspondences become difficult. Moreover, applying discrimination in every frame is not efficient, since it requires too much memory to hold the objects and their correspondences.
Although using the information from the object discrimination phase in the tracker initialization step has advantages, it is not sufficient to make the system fully automatic, since it still does not detect new objects entering the scene or objects leaving it. To solve this, and to obtain a system that adapts to changes in object size and shape and handles the inclusion of background information, we reinitialize the trackers at regular time intervals using the results of the object discrimination step. This update mechanism is shown in Figure 5.
To handle changes in size or shape, we update the trackers every 25 frames. To detect new objects as well as objects that leave the scene, the numbers of objects in adjacent frames are compared; if they are not equal, the trackers are updated. To handle occlusions and splits and to detect newly emerging objects, the bounding-box locations of the objects are compared: if an intersection exists, the trackers are refreshed to handle the inclusion of the front object's color. Separate trackers are initialized for each living and nonliving object, and the same algorithm is used independent of the object type.
However, as a result of re-initialization, trajectories of objects (object correspondences) are lost. To overcome this, we also find the correspondence of objects (living and nonliving) after each re-initialization step. To establish the matching between objects and provide the correspondence across frames, we adapted the correspondence-based tracking method (Y. Dedeoglu, "Moving Object Detection, Tracking and Classification for Smart Video Surveillance", Master's thesis, Bilkent University, Department of Computer Engineering, Turkey, pp. 41-49, August 2004) to our method and used the objects' size, centre of mass, bounding box and colour histogram features. First, we check whether an object o_i is close and similar to an object o_p which exists in the previous frame. Closeness is defined as the distance between the centres of mass of the two objects (o_i and o_p), calculated with the Euclidean formula (Eq. 14). Similarity, on the other hand, is calculated using the size ratio of the objects (Eq. 15). If the distance between object o_i and object o_p is smaller than a distance threshold and the size ratio of objects o_i and o_p is smaller than a size threshold, we define object o_i as the corresponding object for o_p. Closeness is a successful criterion, since the displacement of an object between adjacent frames should be small. However, it is not sufficient on its own, since objects that are close to each other in the previous frame may interfere and the matching might be incorrect. Therefore, the similarity criterion is also required; it is useful because objects do not change scale significantly between consecutive frames.
d(o_p, o_i) = √((x_p − x_i)² + (y_p − y_i)²) ≤ t_distance (14)

where d(o_p, o_i) is the Euclidean distance, (x_i, y_i) and (x_p, y_p) are the x and y components of the centres of mass of objects o_i and o_p, and t_distance is the distance threshold.

if S_p > S_i then S_p / S_i ≤ t_size, otherwise S_i / S_p ≤ t_size (15)

where S_p is the size of object o_p and S_i is the size of o_i.
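The two matching tests of Eqs. (14) and (15) can be sketched as one predicate; here objects are (x, y, size) tuples and the threshold defaults are illustrative (the text gives t_distance = 25 pixels but no explicit t_size):

```python
import math

def corresponds(o_p, o_i, t_distance=25.0, t_size=2.0):
    """o_i in the current frame corresponds to o_p in the previous
    frame if their centres of mass are close (Eq. 14) and the ratio of
    their sizes, larger over smaller, is bounded (Eq. 15)."""
    xp, yp, sp = o_p
    xi, yi, si = o_i
    close = math.hypot(xp - xi, yp - yi) <= t_distance  # Eq. (14)
    ratio = sp / si if sp > si else si / sp             # Eq. (15)
    return close and ratio <= t_size
```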
If object o_i does not match object o_p, there are two possibilities: o_i could be a new object, or it could be the result of an occlusion. To check whether an occlusion exists, we first compare the numbers of objects; if the number of objects has decreased, an occlusion is possible. If object o_i's bounding box overlaps with the bounding boxes of o_p and o_t, it is highly probable that o_p and o_t have occluded each other and generated object o_i (Figure 6).

In such a case, the color histograms of o_p and o_t are stored for comparison when a split occurs, and a tracker is created to follow the occlusion object o_i.
In addition, for each object that enters the scene, we check whether its bounding box overlaps with an occluded object. If it does, we compare its histogram probability density function (pdf) with the occluded object's histogram pdf in order to handle a possible split. To compare histogram pdfs, we use the Bhattacharyya coefficient (Eq. 13), as in mean shift tracking. If there is sufficient similarity, in other words, if the distance between the pdfs is smaller than a threshold, the object is matched with the occluded object and that occluded object's histogram is removed.
If an object does not match any object o_p and there is no occlusion or split, this object is assumed to be a new object and a new tracker is defined to track it. The mechanism for establishing the matching of objects when a re-initialization occurs is given in Figure 7.
While tracking objects, ownership information is extracted and living objects (people) and nonliving objects (belongings) are associated with each other. To find the ownership, the closeness criterion is used: the Euclidean distance (Eq. 14) between each nonliving object and each living object is calculated, and the nonliving object is assigned to the nearest living object. While determining the ownership, it is assumed that an object is not handed over to another person; therefore, once a nonliving object is assigned to a living object, it is not later reassigned to another living object. On the other hand, if a nonliving object would be associated with a living object while that living object is occluded by another living object, a wrong association is strongly possible, so the assignment is made only after the split occurs. If these objects merge to form a new object, then at the next update the nonliving object is associated with the merged object. Example results of this association step are given in Figure 8.
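A minimal sketch of the nearest-living-object assignment, leaving the occlusion deferral aside; the dictionary layout (ids mapped to centre-of-mass coordinates) is an assumption of ours:

```python
import math

def assign_owners(bags, people, owners):
    """Assign each not-yet-owned nonliving object to the nearest living
    object by centre-of-mass Euclidean distance (closeness criterion,
    Eq. 14). Assignments are permanent: an already-owned object is
    never reassigned, per the no-handover assumption in the text."""
    for bag_id, (bx, by) in bags.items():
        if bag_id in owners:
            continue  # once appointed, never reassigned
        owners[bag_id] = min(
            people,
            key=lambda pid: math.hypot(people[pid][0] - bx,
                                       people[pid][1] - by))
    return owners
```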
As seen in Figure 8, the bounding boxes of nonliving objects and living objects are shown as different boxes. To denote the association of an object with a person, a nonliving object is indexed with the notation (Owner Index.Object Index For The Owner): the number before the dot is the index of the nonliving object's owner, and the number after the dot is the index of the nonliving object. For example, in Figure 8, 1.1 is the object belonging to person 1 and 2.1 is the object belonging to person 2.
Abandoned object detection is the main aim of this invention. Integrating abandoned object detection into the tracking system allows the person who leaves luggage unattended to be tracked and detected. This method can identify the owner of an abandoned object if the person who leaves the luggage stays near it until it is detected as abandoned. However, when the luggage is left and its owner exits the field of view, it would not be possible to find the owner without extending the system. Therefore, the association of living and nonliving objects is essential, as it allows the owner of unattended luggage to be found. In this invention, a nonliving object is detected as abandoned when its owner leaves the field of view, and the alarm is raised after N frames have passed. The alarm is removed immediately when the nonliving object is removed. To prevent false alarms in the case of merging objects, the object's owner is checked as to whether it has been occluded and has formed a new object (Figure 9).
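The per-frame alarm logic can be sketched as a small state update; the counter-based formulation and function name are our assumptions, with N = 25 frames (1 s) taken from the parameter list later in the text:

```python
def abandoned_alarm(owner_visible, object_present, counter, N=25):
    """One step of the abandonment logic: raise the alarm once the
    owner has been out of the field of view for N consecutive frames
    while the object remains; clear it immediately when the object is
    removed. Returns the updated counter and the alarm flag."""
    if not object_present:
        return 0, False  # object removed: alarm cleared at once
    counter = 0 if owner_visible else counter + 1
    return counter, counter >= N
```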
Figure 9 shows Person 3 leaving her backpack (object 3.1) on the floor. After it is detected as an abandoned item, temporary occlusions caused by moving persons 5 and 6 do not cause the system to fail. The alarm is raised (Frame #556) after the person owning the backpack leaves.
To track living and nonliving objects and detect abandoned nonliving objects, the images captured from the thermal and visible cameras are first registered. Both cameras should be adjusted to capture a similar field of view. However, it is not practically possible to capture exactly the same field of view (FOV) with both thermal and visible band cameras, since they have different parameters (such as different sensor types and lenses). Therefore, a crop operation is performed on both thermal and visible band frames to set almost the same FOV for both images. Then homography is performed, manually selecting reference points in both the thermal and the visible domain for the image registration. To find the pixel corresponding to each pixel, a homography matrix is constructed, using Eqs. (16) and (17) and the reference points selected from both thermal and visible images.
Vref = H × Tref (16)

H = Vref × Tref⁻¹ (17)

where Vref is the reference point matrix for the visible domain, Tref is the reference point matrix for the thermal domain, and H is the homography matrix for registration. The more reference points are selected, the better the registration results that can be obtained. In this invention, 20 reference points are selected for each dataset. Once the capture and homography parameters are obtained, they can be reused without change as long as the camera positions are not changed.
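A sketch of Eq. (17) with matched reference points in homogeneous coordinates; with more than three points the pseudo-inverse gives the least-squares solution. Note this linear formulation is exact only for affine mappings (a full projective homography would need a DLT-style solver), and the function names are ours:

```python
import numpy as np

def registration_homography(vis_pts, thr_pts):
    """Estimate H from Eq. (17), H = Vref * Tref^-1, using matched
    reference points stacked column-wise in homogeneous coordinates."""
    V = np.vstack([np.asarray(vis_pts, float).T, np.ones(len(vis_pts))])
    T = np.vstack([np.asarray(thr_pts, float).T, np.ones(len(thr_pts))])
    return V @ np.linalg.pinv(T)  # least-squares when >3 points

def thermal_to_visible(H, pt):
    """Map a thermal-domain point into the visible domain (Eq. 16)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```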
While performing background modeling, the number of Gaussians was chosen as 4, the background model learning rate α was taken as 0.0002, the threshold on the squared Mahalanobis distance was taken as 16 (i.e., 4 standard deviations, providing 99% confidence) and the initial standard deviation was taken as 11. For noise removal, the minimum object density below which a connected component is classified as noise was chosen as 0.4, and the pixel-count threshold was set to 1000. The YCrCb colour space is preferred, as it represents the luminance and chrominance layers separately. For mean shift tracking, a three-dimensional (Y, Cr, Cb) histogram with 32×32×32 bins is used, and the number of mean shift iterations to find the new location of the trackers in the following frame is taken as one. The distance threshold for finding the correspondence of an object is taken as 25 pixels, and the size threshold is as defined above. The time period N used to alert the system, as described above, is chosen as 25 frames (1 sec).

Claims

1. A living and nonliving object tracking method characterized in that:
- living and nonliving objects are discriminated with background subtraction (101) on the visible band image,
- to remove image noise, connected component analysis is applied to the background-subtracted visible band image (102),
- a local intensity operation is applied to the thermal image of the same frame as the visible band image (103),
- the result of step 103 is post-processed to close possible holes formed by the local intensity operation (104),
- object discrimination is applied to differentiate, extract and classify objects as living and nonliving using the visible and thermal band image data of each object (105),
- each living and/or nonliving object is tracked using a tracking algorithm (106),
- living and nonliving objects are associated with each other and an owner/carried-object relation is set for the tracked objects (107),
- abandonment of an object is detected using the previously set relations, and the objects are tracked separately (108) to find nonliving objects separated from, dropped or left behind by living objects.
2. A living and nonliving object tracking method according to claim 1, characterized in that an improved adaptive Gaussian mixture model is used as the method of background subtraction.
3. A living and nonliving object tracking method according to any of the preceding claims, characterized by the discrimination of people using their heat signature information.
4. A living and nonliving object tracking method according to any of the preceding claims, characterized in that, before tracking living and nonliving objects and detecting abandoned nonliving objects, both the thermal and the visible band camera are adjusted to capture a similar field of view by adjusting the view angle, registering the views of the two camera images and producing the homography matrix.
PCT/TR2011/000082 2011-04-13 2011-04-13 A method for individual tracking of multiple objects Ceased WO2012141663A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/TR2011/000082 WO2012141663A1 (en) 2011-04-13 2011-04-13 A method for individual tracking of multiple objects


Publications (1)

Publication Number Publication Date
WO2012141663A1 true WO2012141663A1 (en) 2012-10-18

Family

ID=44627134

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2011/000082 Ceased WO2012141663A1 (en) 2011-04-13 2011-04-13 A method for individual tracking of multiple objects

Country Status (1)

Country Link
WO (1) WO2012141663A1 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040120581A1 (en) 2002-08-27 2004-06-24 Ozer I. Burak Method and apparatus for automated video activity analysis
WO2008078112A1 (en) 2006-12-23 2008-07-03 Thruvision Limited Environmental conditioning apparatus, a chamber for use thereof and a related detection method and apparatus
US20090010493A1 (en) 2007-07-03 2009-01-08 Pivotal Vision, Llc Motion-Validating Remote Monitoring System
CA2640931A1 (en) 2007-10-15 2009-04-15 Lockheed Martin Corporation Method of object recognition in image data using combined edge magnitude and edge direction analysis techniques
RU2368952C2 (en) 2007-07-06 2009-09-27 Открытое акционерное общество "Научно-конструкторское бюро вычислительных систем" Method of inputting surveillance object information into tracking system computer and device to this end (versions)
US20100182433A1 (en) * 2007-10-17 2010-07-22 Hitachi Kokusai Electric, Inc. Object detection system
US20100266159A1 (en) 2009-04-21 2010-10-21 Nec Soft, Ltd. Human tracking apparatus, human tracking method, and human tracking processing program


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
AHMET YIGIT ET AL: "Abandoned object detection using thermal and visible band image fusion", SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2010 IEEE 18TH, IEEE, PISCATAWAY, NJ, USA, 22 April 2010 (2010-04-22), pages 617 - 620, XP031815555, ISBN: 978-1-4244-9672-3 *
CIGDEM BEYAN ET AL: "Fusion of thermal- and visible-band video for abandoned object detection", JOURNAL OF ELECTRONIC IMAGING, vol. 20, no. 3, 1 January 2011 (2011-01-01), pages 033001, XP055011689, ISSN: 1017-9909, DOI: 10.1117/1.3602204 *
CIGDEM BEYAN ET AL: "Mean-shift tracking for surveillance applications using thermal and visible band data fusion", PROCEEDINGS OF SPIE, 1 January 2011 (2011-01-01), pages 802010 - 802010-13, XP055011687, ISSN: 0277-786X, DOI: 10.1117/12.882838 *
D. COMANICIU, V. RAMESH, P. MEER: "Kernel-based object tracking", IEEE TRANS. PATTERN ANAL. MACH. INTELL., vol. 25, May 2003 (2003-05-01), pages 564 - 577
R. HERIANSYAH, S.A.R. ABU-BAKAR: "Defect detection in thermal image for nondestructive evaluation of petrochemical equipments", NDT & E INTERNATIONAL, vol. 42, no. 8, December 2009 (2009-12-01), pages 729 - 740, XP026546634, DOI: doi:10.1016/j.ndteint.2009.06.008
Y. DEDEOGLU: "Master's thesis", August 2004, BILKENT UNIVERSITY, article "Moving Object Detection, Tracking and Classification for Smart Video Surveillance", pages: 41 - 49
Z. ZIVKOVIC: "Improved adaptive Gaussian mixture model for background subtraction", PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2004), vol. 2, August 2004 (2004-08-01), pages 28 - 31

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017027212A1 (en) * 2015-08-13 2017-02-16 Microsoft Technology Licensing, Llc Machine vision feature-tracking system
CN106469443A (en) * 2015-08-13 2017-03-01 微软技术许可有限责任公司 Machine vision feature tracking systems
CN106469443B (en) * 2015-08-13 2020-01-21 微软技术许可有限责任公司 Machine Vision Feature Tracking System
CN105825525A (en) * 2016-03-16 2016-08-03 中山大学 TLD target tracking method and device based on Mean-shift model optimization
US10558886B2 (en) 2017-11-15 2020-02-11 International Business Machines Corporation Template fusion system and method
CN109460077A (en) * 2018-11-19 2019-03-12 深圳博为教育科技有限公司 A kind of automatic tracking method, automatic tracking device and automatic tracking system
CN109460077B (en) * 2018-11-19 2022-05-17 深圳博为教育科技有限公司 Automatic tracking method, automatic tracking equipment and automatic tracking system
CN111797727A (en) * 2020-06-18 2020-10-20 浙江大华技术股份有限公司 Method and device for detecting road surface sprinkled object and storage medium
CN111797727B (en) * 2020-06-18 2023-04-07 浙江大华技术股份有限公司 Method and device for detecting road surface sprinkled object and storage medium
CN111913435A (en) * 2020-07-30 2020-11-10 浙江科技学院 Single/multi-target key point positioning method based on stacked hourglass network
EP4250217A1 (en) * 2022-03-22 2023-09-27 Fujifilm Business Innovation Corp. Information processing apparatus, program, and information processing method
CN115497056A (en) * 2022-11-21 2022-12-20 南京华苏科技有限公司 Method for detecting lost articles in region based on deep learning


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11725995

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2013/02821

Country of ref document: TR

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11725995

Country of ref document: EP

Kind code of ref document: A1