
MXPA98009084A - Method for detecting moving cast shadows for object segmentation - Google Patents

Method for detecting moving cast shadows for object segmentation

Info

Publication number
MXPA98009084A
MXPA98009084A (application MX9809084A)
Authority
MX
Mexico
Prior art keywords
image
pixel
change detection
mask
mobile
Prior art date
Application number
MX9809084A
Other languages
Spanish (es)
Inventor
Ostermann Joern
Original Assignee
AT&T Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Corp
Publication of MXPA98009084A


Landscapes

  • Image Analysis (AREA)

Abstract

An image region changed by a moving cast shadow from a first image to a second image is detected. For each pixel within a change detection mask (a binary mask indicating image areas of difference between the first image and the second image) and a set of neighboring pixels, the following steps are performed. Whether the pixel and the set of neighboring pixels include a static background edge or no edge is determined. Whether the pixel and the set of neighboring pixels include an edge with a spatial signal step width greater than a threshold is determined. Whether the pixel and the set of neighboring pixels have a uniform temporal change of illumination is determined. The pixel is classified as being changed by a moving cast shadow when at least two of the above-mentioned determinations succeed for the pixel, or when at least one of the above-mentioned determinations succeeds for a majority of the set of neighboring pixels. In another embodiment of the present invention, detected image regions are used to estimate the two-dimensional shape of moving objects in image sequences even in the presence of moving cast shadows. In another embodiment, detected image regions are temporally integrated to represent the total of moving cast shadows.

Description

METHOD FOR DETECTING MOVING CAST SHADOWS FOR OBJECT SEGMENTATION

BACKGROUND OF THE INVENTION

The present invention relates generally to the processing of video images. More specifically, the present invention relates to the detection of moving cast shadows within video image sequences so that segmentation of objects can be performed. Shadows appear in a wide variety of scenes, including a wide variety of scenes in sequential video images. If shadows are identified in video images, they can provide a large amount of information about the shape, relative position and surface characteristics of the objects in a scene. Although humans can easily distinguish shadows from objects, the identification of shadows by computers is more difficult. For example, the identification of shadows by computers involves segmentation of an object: the separation of an object, its shadow and the background, each of which may have both stationary and moving portions. The identification of shadows by computers is even more difficult when cast shadows move from one video image to the next. Known methods and systems have been developed to try to detect moving cast shadows. For example, one known method performs shadow detection with a static camera that provides the image by subdividing an image into blocks and calculating the luminance contrast for each block; blocks are identified as moving cast shadows where the luminance contrast changes from pixel to pixel within a block. See Skifstad, K. and Jain, R., "Illumination Independent Change Detection for Real World Image Sequences," Computer Vision, Graphics, and Image Processing 46, 387-99 (1989). However, this known method suffers from drawbacks. Specifically, it cannot distinguish a moving cast shadow from a moving object when the object lacks texture, that is, when it lacks defined contrast within the object.
BRIEF DESCRIPTION OF THE INVENTION

An image region changed by a moving cast shadow from a first image to a second image is detected. For each pixel within a change detection mask (a binary mask indicating image areas of difference between the first image and the second image) and a set of neighboring pixels, the following steps are performed. It is determined whether the pixel and the set of neighboring pixels include a static background edge or no edge. It is determined whether the pixel and the set of neighboring pixels include an edge with a spatial signal step width greater than a threshold. It is determined whether the pixel and the set of neighboring pixels have a uniform temporal change of illumination. The pixel is classified as changed by a moving cast shadow when at least two of the aforementioned determinations succeed for the pixel, or when at least one of the aforementioned determinations succeeds for a majority of the set of neighboring pixels. In another embodiment of the present invention, detected image regions are used to estimate the two-dimensional shape of moving objects in image sequences even in the presence of moving cast shadows. In another embodiment, the detected image regions are temporally integrated to represent the total of the moving cast shadows.
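The classification rule summarized above can be sketched in a few lines. This is a minimal sketch under assumed data layouts (boolean triples for the three determinations; the function name is illustrative, not from the patent):

```python
# Sketch of the classification rule: a pixel is changed by a moving cast
# shadow when at least two of the three determinations succeed for the
# pixel itself, or when at least one determination succeeds for a
# majority of its neighbors.  Names and layouts are assumptions.
def changed_by_moving_shadow(pixel_results, neighbor_results):
    """pixel_results: (static_edge, wide_step, uniform_illumination)
    booleans for the pixel; neighbor_results: a list of such triples,
    one per neighboring pixel."""
    if sum(pixel_results) >= 2:
        return True
    # Count neighbors for which at least one determination succeeded.
    passing = sum(1 for r in neighbor_results if any(r))
    return passing > len(neighbor_results) / 2
```

For example, a pixel with two succeeding determinations is classified as shadow regardless of its neighbors, while a pixel with only one succeeding determination is classified as shadow only when most of its neighbors pass at least one test.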
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows a possible configuration for the formation of a cast shadow, according to an embodiment of the present invention.
Figure 2 illustrates a process by which background edges can be detected, according to one embodiment of the present invention. Figure 3 illustrates a process for testing the spatial constancy of the frame ratio within an image by evaluating its local spatial variance, according to one embodiment of the present invention. Figure 4 illustrates a process by which penumbrae can be detected, according to one embodiment of the present invention. Figure 5 illustrates a luminance step model for a luminance step in an image perpendicular to a shadow contour, according to one embodiment of the present invention. Figure 6 illustrates the application of heuristic rules to determine the image regions changed by a moving cast shadow, according to one embodiment of the present invention. Figures 7 and 8 illustrate a process by which an estimation of the two-dimensional shape of moving objects in an image sequence can be applied to image regions changed by moving cast shadows, in accordance with one embodiment of the present invention.
Figure 1 shows a possible configuration for the formation of a cast shadow, according to an embodiment of the present invention. The object 100 is illuminated by a light source 110. The cast shadow 130 is projected onto the background 120. The cast shadow 130 includes an umbra 131 and a penumbra 132. The penumbra 132 is a smooth transition from dark to light where part of the light from the light source 110 reaches the background 120. The appearance of the cast shadow 130 can be recorded by a video camera 140. Shadows 130 cast on the background 120, including cast shadows which move (referred to herein as "moving cast shadows"), can be detected by the video camera 140, which collects consecutive images, and by a processor (not shown) which analyzes the video images. The methods described below can be used by the processor to analyze the video images. Note that the processor does not need to be directly coupled to the video camera 140; instead, the video images can be detected by the video camera 140 and subsequently analyzed by the processor, or the video images can be analyzed in near real time by the processor as each image is detected by the video camera 140. In one implementation, the video images are detected by the video camera 140, analyzed by the processor and sent for transmission over a telecommunications network. The appearance of the cast shadow in an image of the video camera 140 can be described by an image signal model. This model describes the image luminance as follows:

s_k(x, y) = E_k(x, y) * t_k(x, y)    (1)

where k is the instance of time; x, y is the two-dimensional position in the image; E_k(x, y) is the irradiance of the surface of object 100; and t_k(x, y) is the reflectance of the surface of object 100. The irradiance E_k(x, y) is the amount of light energy per unit surface area that the object receives.
The irradiance E_k(x, y) is a function of the direction, L, of the light source, the intensities c_p and c_a of the light source 110 and the ambient light, respectively, and the surface normal of the object, N, according to the following equation:

              { c_a + c_p * cos∠(N(x, y), L)            if illuminated
E_k(x, y) =   { c_a + k(x, y) * c_p * cos∠(N(x, y), L)  if in penumbra    (2)
              { c_a                                      if in umbra

In equation 2, the term k(x, y), which has a value between 0 and 1, describes the transition within the penumbra 132 and depends on the geometry of the scene. The intensity c_p of the light source is proportional to 1/r^2, where r is the distance between the object 100 and the light source 110. In the image signal model of equation 1, the photometric distortions of the perspective projection are disregarded. Note also that the gamma nonlinearity (i.e., the exponential power law used to approximate the curve of output magnitude versus input magnitude over the region of interest) of the video camera is not considered. Finally, the assumption is made that the color of the ambient light is the same as the color of the point light source. The following assumptions are made: the video camera 140 and the background 120 are static; the background 120 is locally flat and the light source 110 is distant from the background 120; the light source 110 has an extent that is not negligible compared to its distance from the moving object 100; and the intensity, c_p, of the light source 110 is high. Accordingly, the shadows 130 cast on the background 120 will be part of a change detection mask. A change detection mask indicates those image regions that have a large frame difference between the previous and the current image. In other words, the change detection mask indicates the difference in luminance on a per-pixel basis between two consecutive image frames. To illustrate a consequence of the assumption that the intensity c_p of the light source 110 is high, consider a pixel at position x, y showing a part of the
background 120. Because the reflectance of the background 120 is static and does not change, t_k(x, y) = t_{k+1}(x, y) holds. When the pixel is in the umbra 131 of the shadow at time k and in a penumbra 132 at time k + 1, the difference in image luminance between frames will be large, as illustrated by the following equation:

s_{k+1}(x, y) - s_k(x, y) = t_k(x, y) * k(x, y) * c_p * cos∠(N(x, y), L) ≠ 0    (3)

All the pixels that are part of the change detection mask are evaluated by the following three criteria: (1) detection of static background edges, (2) detection of uniform lighting changes, and (3) penumbra detection. The results of these three criteria can be combined into a binary mask for regions of the background 120 that have been changed by the moving cast shadows 130. Finally, for each image of an image sequence, the moving cast shadows can be detected by temporal integration of regions changed by the moving cast shadows. Each of these aspects is discussed below.

1. Detection of Static Background Edges

To satisfy the assumption that the video camera 140 and the background 120 are static, numerous configurations are possible. First, any movement of the video camera 140 and the background 120 can be restricted. Alternatively, the previous image s_k can be motion-compensated with respect to the subsequent image s_{k+1}. Figure 2 illustrates a process by which static background edges can be detected, according to one embodiment of the present invention. For example, in textured background regions within the change detection mask, the moving cast shadows 130 can be distinguished from the moving objects 100 by their static background edges, because neither the background 120 nor the camera 140 is in motion, or their movement is compensated. In step 200, a first set of edges is detected in a first image. In step 210, a second set of edges is detected in a second image.
In step 220, the first image is subtracted from the second image to produce the frame difference.
In step 230, a high frequency filter is applied to the frame difference to produce a high frequency frame difference. In step 240, the high frequency frame difference is compared to a high frequency threshold. The threshold for high frequency activity can be calculated adaptively from the high frequency activity of the frame difference outside the change detection mask. For each edge from the first and second edge sets from the first and second images, respectively, steps 250 and 260 are performed. In conditional step 250, a determination is made for each pixel of each edge as to whether the high frequency frame difference is below the threshold. When the high frequency frame difference is below the threshold, that edge is classified as a static edge in step 260. The static edges can be used to detect moving cast shadows on a non-moving background within a change detection mask. As illustrated in equation 1, the static edges in an image s_k(x, y) can be due to either the reflectance t_k(x, y) or the irradiance E_k(x, y). Static edges caused by discontinuities in the reflectance suggest a texture in the static background 120. Static edges caused by discontinuities in the irradiance suggest discontinuous shading at the edges of a three-dimensional shape in the static background 120. Therefore, static edges within the change detection mask suggest the possibility of a moving cast shadow 130 on a static background 120.

2. Detection of Uniform Illumination Changes

Note the following assumptions: first, the assumption is made that the background 120 is locally flat, and the assumption is made that the light source 110 is distant from the background 120. Consequently, the irradiance, as calculated by equation 2, is spatially constant because the surface normal N(x, y) is spatially constant. Furthermore, it is assumed that the penumbra regions 132 of the cast shadow 130 can be neglected.
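Returning to the static-edge criterion of section 1, the test of Figure 2 (steps 220 to 260) can be sketched in one dimension. This is a minimal sketch: the [-1, 2, -1] high-pass kernel, the toy signals and all names are illustrative assumptions standing in for the patent's unspecified filter:

```python
# 1-D sketch of the static-edge test: edges whose high-frequency frame
# difference is small did not move between the two frames.
def high_pass(signal):
    # Simple [-1, 2, -1] high-frequency filter (an assumed stand-in for
    # the patent's high-frequency filter); output index j maps to
    # input position j + 1.
    return [-signal[i - 1] + 2 * signal[i] - signal[i + 1]
            for i in range(1, len(signal) - 1)]

def static_edges(frame_k, frame_k1, edge_positions, threshold):
    """Classify edge positions as static when the high-frequency frame
    difference there is below the threshold (steps 220-260)."""
    diff = [b - a for a, b in zip(frame_k, frame_k1)]   # step 220
    hf = high_pass(diff)                                 # step 230
    static = set()
    for pos in edge_positions:                           # steps 250-260
        if 1 <= pos < len(frame_k) - 1 and abs(hf[pos - 1]) < threshold:
            static.add(pos)
    return static
```

An edge present identically in both frames yields a zero frame difference and is classified static; an edge that moved produces large high-frequency activity and is rejected.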
Figure 3 illustrates a process for testing the spatial constancy of the frame ratio within an image by evaluating its local spatial variance, according to one embodiment of the present invention. Steps 300 to 330 are carried out for each pixel, for two consecutive frame images. In step 300, the frame ratio is determined for each pixel within the change detection mask. The frame ratio for each pixel can be calculated using the following equation:

FR(x, y) = s_{k+1}(x, y) / s_k(x, y) = [E_{k+1}(x, y) * t_{k+1}(x, y)] / [E_k(x, y) * t_k(x, y)]    (4)

Because the assumption is made that the luminance at position x, y has changed due to a cast shadow moving over a static background, the assumption can also be made that the background reflectance has not changed, and therefore t_k(x, y) = t_{k+1}(x, y). Therefore, disregarding any camera noise, the frame ratio can be simplified to

FR(x, y) = E_{k+1}(x, y) / E_k(x, y)    (5)

Consequently, when the luminance at position x, y changes due to a moving cast shadow, the frame ratio is spatially constant in the neighborhood of x, y, because the assumption is made that the irradiance is constant, as discussed above. Therefore, if the frame ratio is locally spatially constant, the assumption can be made that a moving cast shadow is at position x, y. The frame ratio is then tested for spatial constancy by evaluating its local spatial variance.
In step 310, the local spatial variance of the frame ratio for the pixel is compared to an illumination threshold. A small local spatial variance is tolerated to compensate for noise. In conditional step 320, a determination is made as to whether the local spatial variance is below the illumination threshold. If the local spatial variance is below the illumination threshold, then the process advances to conditional step 325. In conditional step 325, a determination is made as to whether the frame ratio in a local neighborhood (e.g., an area of 3 x 3 pixels around the pixel being considered) is uniformly above or below one. If this condition is satisfied, then the process proceeds to step 330. In step 330, the pixel is classified as having a uniform temporal illumination change. A pixel that has a uniform temporal change of illumination suggests the presence of a moving cast shadow at that pixel position. The illumination threshold can be calculated adaptively from the local variances of the frame ratio outside the change detection mask. Note that the process described in Figure 3 can erroneously detect a moving cast shadow if, at position x, y, a uniformly colored rotating moving object is visible. In this case, the simplification of equation 4 to equation 5 would still be valid and the frame ratio would still be locally spatially constant, even though no moving cast shadow exists at this particular pixel position.

3. Penumbra Detection

Because the assumption is made that the extent of the light source 110 is not negligible in comparison to the distance between the light source 110 and the moving object 100, the cast shadow 130 has penumbra regions 132. The cast shadow 130 can be detected by the existence and characteristics of the penumbrae 132. The penumbra 132 of the cast shadow 130 causes a luminance step at the contour of the shadow 130.
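The uniform-illumination criterion of section 2 (Figure 3, steps 300 to 330) can be sketched in one dimension as follows. The neighborhood radius, the toy frames and all names are illustrative assumptions:

```python
# 1-D sketch of the uniform-illumination test: the frame ratio
# FR = s_{k+1} / s_k must be locally constant (low spatial variance)
# and uniformly above or below one in the neighborhood.
def uniform_illumination_change(frame_k, frame_k1, pos, var_threshold,
                                radius=1):
    """True when the frame ratio around pos has a local spatial variance
    below the threshold (step 310/320) and lies uniformly on one side of
    one (step 325)."""
    lo, hi = max(0, pos - radius), min(len(frame_k), pos + radius + 1)
    ratios = [frame_k1[i] / frame_k[i] for i in range(lo, hi)]
    mean = sum(ratios) / len(ratios)
    var = sum((r - mean) ** 2 for r in ratios) / len(ratios)
    uniform_side = all(r > 1.0 for r in ratios) or \
                   all(r < 1.0 for r in ratios)
    return var < var_threshold and uniform_side
```

A shadow passing over a static background darkens all neighboring pixels by the same factor (constant ratio below one), whereas a moving textured object produces widely varying ratios.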
The luminance step in an image perpendicular to a shadow contour can be modeled by a luminance step model as illustrated in Figure 5, according to one embodiment of the present invention. In this luminance step model, the assumption is made that the luminance increases linearly across the penumbra 132 from a low luminance within the shadow (i.e., the umbra 131) to a high luminance outside the shadow (i.e., the background 120). The luminance step within the penumbra 132 can be characterized by its step height h, its step width w, and its gradient g, which equals h/w. If the width of a luminance step caused by a penumbra 132 is much greater than that of the edges caused by the aperture of the video camera 140 at surface texture edges of the object 100 or at the edges of the object 100, then the luminance step width can be used to detect the shadow. Table 1 characterizes the height, gradient and luminance step width of different kinds of edges in an image. As is shown in Table 1, shadow edges can be differentiated from other edges by their luminance step width. The luminance step height alone is not an appropriate criterion by which shadow edges can be differentiated, because either a shadow edge caused by a bright light source 110 or a texture edge with high contrast can cause a high luminance step. The luminance step gradient alone is also not an appropriate criterion by which shadow edges can be differentiated, because the gradient of a shadow edge caused by a bright light source 110 (with some extent) can be comparable to that of a texture edge with less contrast (and with a small aperture of the video camera 140).
Table 1

Figure 4 illustrates a process by which penumbrae can be detected, according to one embodiment of the present invention. In step 400, moving cast shadow boundary candidates are obtained from the boundary of the change detection mask. A moving cast shadow boundary candidate may belong to the edges of the object 100 or to a shadow boundary, because the change detection mask contains image regions changed by the moving objects 100 or by the moving cast shadows 130. The number of moving cast shadow boundary candidates is low compared to the number of edges indicated by known edge detection algorithms.
In addition, known edge detection algorithms have difficulties in finding soft shadow contours. Furthermore, the number of moving cast shadow boundary candidates is further reduced because the object mask of the first image, if available, is combined with the change detection mask to fill holes within the change detection mask. In addition, to improve accuracy, the moving cast shadow boundary candidates are moved perpendicular to the change detection mask boundary to a position of greater luminance gradient. The gradient is measured perpendicular to the boundary of the change detection mask using a Sobel operator. The Sobel operator consists of two finite impulse response (FIR) filters with the filter kernels

-1 0 1         -1 -2 -1
-2 0 2   and    0  0  0
-1 0 1          1  2  1

Steps 410, 420 and 430 are performed for each moving cast shadow boundary candidate. In step 410, a spatial signal step width of the frame difference is evaluated. In conditional step 420, the spatial signal step width is compared to a width threshold. If the spatial signal step width exceeds the width threshold, then the moving cast shadow boundary candidate is classified as a moving cast shadow boundary in step 430. To determine the spatial signal step width for each moving cast shadow boundary candidate, the height and gradient of the signal step perpendicular to the edge are measured for each candidate. The height and gradient of the signal step are measured in the frame difference between two consecutive images, because whether the relevant edges lie in the previous image or in the current one depends on the unknown motion of the cast shadows and the objects. The height of the signal step can be measured by the difference of the average frame differences on both sides of the edge.
For example, a window averaging three pixels by three pixels (for example, for the common intermediate format (CIF) image format) can be placed on each side of the edge. The signal gradient can be measured using a Sobel operator aligned perpendicular to the edge. The direction of the edge can be measured by a regression line that evaluates the moving cast shadow boundary candidates in a neighborhood of 7 pixels by 7 pixels. The spatial signal step width, w, is equal to the height h divided by the gradient g. The width threshold can be selected for the particular system as appropriate. For example, the width threshold can be equal to 2.4 pixels for a standard video camera. The width threshold may be lower for high definition television (HDTV) or higher for low light level video systems.

4. Detection of Image Regions Changed by Moving Cast Shadows

To detect the image regions changed by the moving cast shadows, the results of the three criteria of sections 1, 2 and 3 can be evaluated by heuristic rules. For each pixel of the change detection mask, a determination must be made as to whether the changes are caused by a moving cast shadow or by some other phenomenon. Table 2 summarizes part of the heuristic evaluation rules to determine whether or not the pixel has been changed by a moving cast shadow. As the table illustrates, the first column considers whether a change has been detected. The second column evaluates the result of the edge classification of section 1. The third column indicates the result of the illumination change classification of section 2, and the fourth column indicates the decision as to whether the pixel has been changed by a moving cast shadow. Additionally, the penumbra criterion of section 3 is evaluated in a local neighborhood of each pixel. If too many object edges are observed, the shadow hypothesis of Table 2 (i.e., column 4) is rejected.
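The penumbra criterion of section 3 reduces to a width test on the luminance step of Figure 5. The sketch below uses the 2.4-pixel example threshold from the text; the function names are illustrative assumptions:

```python
# Sketch of the penumbra width test (steps 410-430): the spatial signal
# step width w = h / g is compared with a width threshold, and only wide
# (soft) steps are kept as moving cast shadow boundaries.
def step_width(height, gradient):
    """Width w of a luminance step with height h and gradient g = h / w."""
    return abs(height) / abs(gradient)

def is_shadow_boundary(height, gradient, width_threshold=2.4):
    """Classify a moving cast shadow boundary candidate as a shadow
    boundary when its step width exceeds the threshold."""
    return step_width(height, gradient) > width_threshold
```

A soft penumbra transition (small gradient relative to its height) yields a wide step and passes; a sharp texture or object edge yields a narrow step and is rejected.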
Row # | Change detection result | Edge classification (section 1) | Illumination change classification (section 2) | Decision: pixel changed by a moving cast shadow
0  | no change | no edge     | no result   | NO
1  | no change | no edge     | uniform     | NO
2  | no change | no edge     | non-uniform | NO
3  | no change | static edge | no result   | NO
4  | no change | static edge | uniform     | NO
5  | no change | static edge | non-uniform | NO
6  | no change | moving edge | no result   | NO
7  | no change | moving edge | uniform     | if any neighboring pixel satisfies row 13, then YES; otherwise NO
8  | no change | moving edge | non-uniform | NO
9  | changed   | no edge     | no result   | NO
10 | changed   | no edge     | uniform     | YES
11 | changed   | no edge     | non-uniform | if any neighboring pixel satisfies row 10 but not row 15, or rows 0-6 but not rows 12 and 17, then YES; otherwise NO
12 | changed   | static edge | no result   | if any neighboring pixel satisfies row 13 but not rows 11 and 17, then YES; otherwise NO
13 | changed   | static edge | uniform     | YES
14 | changed   | static edge | non-uniform | YES
15 | changed   | moving edge | no result   | NO
16 | changed   | moving edge | uniform     | if any neighboring pixel satisfies row 13, then YES; otherwise NO
17 | changed   | moving edge | non-uniform | NO

Table 2

Figure 6 illustrates the application of heuristic rules for determining image regions changed by a moving cast shadow, in accordance with one embodiment of the present invention. Steps 600 to 650 are repeated for each pixel within the change detection mask. In step 600, a determination is made as to whether the pixel and neighboring pixels include a static background edge. The determination made in step 600 is carried out according to the process described in section 1 above. In step 610, a determination is made as to whether the pixel and the neighboring pixels are close to an edge with a spatial signal step width greater than a width threshold.
The determination of step 610 can be performed according to the process described in section 3 above. In step 620, a determination is made as to whether the pixel and the neighboring pixels have a uniform temporal illumination change. The determination made in step 620 can be made according to the process described in section 2 above. Steps 630 to 650 classify the pixels as changed by moving cast shadows according to the heuristic rules described in Table 2 and section 4. In conditional step 630, the determinations made in steps 600, 610 and 620 are evaluated with respect to the pixel and its neighboring pixels. If fewer than two of the determinations succeed for the pixel and its neighboring pixels, then the process ends for that pixel. If at least two determinations succeed for the pixel and its neighboring pixels, then the process advances to step 650. In step 650, the pixel is classified as changed by a moving cast shadow.

5. Segmentation of Moving Objects in the Presence of Moving Shadows

Figures 7 and 8 illustrate a process by which an estimation of the two-dimensional shape of moving objects in an image sequence can be applied to a sequence of images containing both moving objects and shadows of moving objects, according to an embodiment of the present invention. In step 700, the apparent motion of the video camera 140 or the background 120 can be estimated and compensated to account for any kind of global motion, for example, motion caused by zooming and panning of the camera. In step 710, scene cut detection is performed by evaluating whether the mean square error between the current original frame s_{k+1} and the camera-motion-compensated previous frame exceeds a given threshold. If the threshold is exceeded, then all the parameters are reset to their initial values. The scene cut detection performed in step 710 is performed only in the background regions of the previous frame, which are taken from the previous object mask (OM_k).
In this mask, all pixels that belong to a moving object in the previous frame are set to foreground. In step 720, a change detection mask between two successive frames is estimated. This step 720 is described in greater detail with reference to Figure 8 and will be discussed later.
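The scene cut test of step 710 can be sketched directly. This is a minimal sketch under assumed data layouts (1-D frames indexed by pixel position; the threshold value in the usage is illustrative):

```python
# Sketch of step 710: a scene cut is declared when the mean square error
# between the current frame and the camera-motion-compensated previous
# frame, evaluated only over background pixels, exceeds a threshold.
def scene_cut(frame_current, frame_prev_compensated, background_pixels,
              mse_threshold):
    """background_pixels: positions outside the previous object mask."""
    errs = [(frame_current[p] - frame_prev_compensated[p]) ** 2
            for p in background_pixels]
    return sum(errs) / len(errs) > mse_threshold
```

When a cut is detected, the segmentation parameters are reset to their initial values, as described above.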
In step 730, an initial object mask OM1 is calculated by removing the uncovered background areas from the final change detection mask CDM_{k+1}. For this purpose, displacement information is used for pixels within the changed regions. The displacement is estimated by hierarchical block matching (HBM). See M. Bierling, "Displacement estimation by hierarchical block matching", 3rd SPIE Symposium on Visual Communications and Image Processing, Cambridge, USA, pp. 941-51, November 1988, which is incorporated herein by reference for background. For greater precision of the calculated displacement vector field, the change detection mask of the first step is considered by HBM. Uncovered background is detected at pixels for which the foot or tip of the corresponding displacement vector lies outside the changed area in the final change detection mask CDM_{k+1}. In step 740, the final object mask is estimated. In other words, the boundaries of the initial object mask OM1 are adapted to the luminance edges of the current image to improve accuracy. The final result is the final object mask OM_{k+1}. Figure 8 describes the process by which the change detection mask is estimated in step 720 of Figure 7, according to an embodiment of the present invention. In step 800, an initial change detection mask CDM1 is determined based on a first image and a second image. In other words, an initial change detection mask CDM1 is generated between two successive frames by thresholding the frame difference using a global threshold. In this initial change detection mask, pixels with a change in image luminance due to a moving object are marked as changed; the others are marked as unchanged. In step 810, a shadow portion of the change detection mask, changed by the moving cast shadows, is detected to produce a remaining portion of the change detection mask.
The process for detecting the portion of the change detection mask which is changed by the moving cast shadows can be performed according to the processes described above with reference to sections 1, 2 and 3. In step 820, the boundaries of the changed image areas within the remaining portion of the change detection mask are smoothed. These boundaries can be smoothed, for example, by a relaxation technique that uses locally adapted thresholds. Accordingly, the process automatically adapts, frame by frame, to the noise of the video camera 140. In step 830, the smoothed remaining portion of the change detection mask is combined with an object mask from a first image, if available, to produce an object change detection mask. This step allows the production of temporally stable object regions. More specifically, the object change detection mask contains all the pixels of the remaining portion of the change detection mask which are marked as changed and, additionally, all the pixels belonging to the object mask of the previous frame. This is based on the assumption that all the pixels which belong to the previous object mask must belong to the current object change detection mask. In addition, to avoid infinite error propagation, a pixel of the previous object mask is marked as changed in the object change detection mask only if it is also marked as changed in the remaining portion of the change detection mask of one of the last N frames. The value N corresponds to the time period in which this particular pixel has been identified as changed. The value N automatically adapts to the sequence by evaluating the size and range of motion of moving objects in the previous frame. In step 840, the small regions of the object change detection mask are eliminated, yielding the final change detection mask CDM_{k+1}.
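Two pieces of the Figure 8 process above can be sketched compactly: step 800's global-threshold frame differencing and step 830's N-frame persistence rule. The data layouts (1-D frames, a per-pixel record of the last frame in which the pixel was marked changed) and all names are illustrative assumptions:

```python
# Sketch of step 800: the initial change detection mask marks pixels
# whose luminance difference between two successive frames exceeds a
# global threshold.
def initial_change_detection_mask(frame_k, frame_k1, global_threshold):
    return {i for i, (a, b) in enumerate(zip(frame_k, frame_k1))
            if abs(b - a) > global_threshold}

# Sketch of step 830's anti-error-propagation rule: a previous-object-mask
# pixel is carried into the object change detection mask only if it was
# marked changed within one of the last n frames.
def carry_over(prev_object_mask, last_changed_frame, current_frame, n):
    """last_changed_frame maps pixel -> most recent frame index where the
    pixel was marked changed in the remaining change detection mask."""
    return {p for p in prev_object_mask
            if p in last_changed_frame
            and current_frame - last_changed_frame[p] <= n}
```

Pixels that have not been confirmed as changed within the persistence window drop out of the mask, which prevents a misclassified pixel from surviving indefinitely.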
Of course, it should be understood that although the present invention has been described with reference to a particular system configuration and process, other systems and processes will be apparent to those of ordinary skill in the art. For example, although the present invention has been described with reference to an example arrangement of the object, light source, background and video camera, other arrangements are possible. It is noted that, in relation to this date, the best method known to the applicant to carry out said invention is that which is clear from the present description of the invention. Having described the invention as above, the content of the following claims is claimed as property:

Claims (20)

CLAIMS
1. A method for detecting a static background edge within a first image, characterized in that it comprises: (a) detecting a first plurality of edges in the first image; (b) detecting a second plurality of edges in a second image; (c) subtracting the first image from the second image to produce a frame difference; (d) applying a high frequency filter to the frame difference to produce a high frequency frame difference; (e) comparing the high frequency frame difference with a threshold; and (f) classifying, for each pixel of the first plurality of edges and the second plurality of edges, the edge as static when the high frequency frame difference is below the threshold.
2. The method according to claim 1, characterized in that the threshold is computed adaptively based on the high frequency frame difference.
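Claims 1-2 can be illustrated with a small sketch. The gradient-magnitude edge detector, the 3x3 box high-pass filter, and both threshold values are illustrative stand-ins of my choosing; the claims do not fix particular filters (claim 2 only notes the threshold may be adapted from the high frequency frame difference itself).

```python
import numpy as np

def gradient_edges(img, edge_thresh=30.0):
    """Gradient-magnitude edge map (an illustrative stand-in for any
    edge detector)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy) > edge_thresh

def highpass_3x3(img):
    """Crude high-pass filter: pixel value minus its 3x3 local mean."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    return img - local_mean

def static_edges(frame1, frame2, hf_thresh=10.0):
    """Claim-1 sketch: an edge pixel of either frame is classified as a
    static background edge when the high-pass-filtered frame difference
    stays below a threshold there -- the edge did not move, so the
    difference carries no high-frequency energy at that location."""
    edges = gradient_edges(frame1) | gradient_edges(frame2)
    hf_diff = np.abs(highpass_3x3(frame2.astype(np.float64)
                                  - frame1.astype(np.float64)))
    return edges & (hf_diff < hf_thresh)
```

An edge covered by a moving shadow only changes brightness uniformly, so the frame difference around it is smooth and the high-pass response stays small; a moving object edge leaves a sharp difference and fails the test.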
3. A method for detecting a uniform temporal illumination change in a pixel between a first image and a second image, characterized in that it comprises: (a) determining a frame ratio within a change detection mask based on a first image and a second image, the change detection mask corresponding to a plurality of image regions of large difference between the first image and the second image; (b) comparing a local spatial variance of the frame ratio for each pixel with an illumination threshold; and (c) classifying each pixel as having a uniform temporal illumination change when the local spatial variance is less than the illumination threshold for that pixel.
4. The method according to claim 3, characterized in that the classification step (c) is carried out only when a subset of pixels neighboring the pixel has a frame ratio uniformly greater than or less than one.
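The frame-ratio test of claims 3-4 can be sketched as follows. The 3x3 window, the variance threshold, and the small epsilon guarding against division by zero are illustrative assumptions.

```python
import numpy as np

def uniform_illumination_change(frame1, frame2, cdm, var_thresh=0.01, eps=1e-6):
    """Claims 3-4 sketch: within the change detection mask, compute the
    frame ratio frame2/frame1 and accept pixels whose 3x3 local variance
    of that ratio is below an illumination threshold -- a cast shadow
    scales the background luminance by a roughly constant factor."""
    ratio = frame2.astype(np.float64) / (frame1.astype(np.float64) + eps)
    h, w = ratio.shape
    pad = np.pad(ratio, 1, mode='edge')
    windows = np.stack([pad[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    local_var = windows.var(axis=0)
    return cdm & (local_var < var_thresh)
```

Inside a shadowed region the ratio is nearly constant (and below one), so the local variance is tiny; at the boundary of a moving object the ratio fluctuates and the test fails.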
5. A method for detecting boundaries of a mobile projected shadow within a change detection mask having a limit, the change detection mask corresponding to a difference between a first image and a second image, the method characterized in that it comprises: (a) obtaining a plurality of mobile projected shadow boundary candidates from the limit of the change detection mask; (b) performing, for each mobile projected shadow boundary candidate of the plurality of mobile projected shadow boundary candidates, the following sub-steps: (i) evaluating a spatial signal step width; and (ii) classifying the mobile projected shadow boundary candidate as a mobile projected shadow boundary when the mobile projected shadow boundary candidate has a spatial signal step width exceeding a width threshold.
6. The method according to claim 5, characterized in that the spatial signal step width evaluated in evaluation step (b)(i) corresponds to the frame difference between the first image and the second image.
7. The method according to claim 5, characterized in that the width threshold is related to an aperture of the camera that provides the first image and the second image.
8. The method according to claim 5, characterized in that the spatial signal step width evaluated in evaluation step (b)(i) for each mobile projected shadow boundary candidate is determined along a line perpendicular to the mobile projected shadow boundary candidate, the perpendicular line being determined by linear regression of a plurality of positions of the neighboring mobile projected shadow boundary candidates.
9. The method according to claim 5, characterized in that the spatial signal step width evaluated in evaluation step (b)(i) for each mobile projected shadow boundary candidate corresponds to a spatial signal step height divided by a spatial signal step gradient.
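The width measure of claims 5 and 9 can be sketched on a 1-D profile. The function names and the width threshold are illustrative; claim 7 relates the threshold to the camera aperture, and claim 8 specifies how the perpendicular profile direction is found.

```python
import numpy as np

def signal_step_width(profile):
    """Claim-9 sketch: for a 1-D luminance profile sampled perpendicular
    to a candidate boundary (claim 8), the spatial signal step width is
    the step height divided by the maximum gradient along the profile."""
    profile = np.asarray(profile, dtype=np.float64)
    height = abs(profile[-1] - profile[0])
    grad = np.abs(np.diff(profile)).max()
    return height / grad if grad > 0 else np.inf

def is_shadow_boundary(profile, width_thresh=2.0):
    """Claim-5 sketch: a wide, soft step (penumbra) indicates a shadow
    boundary; a sharp step indicates an object edge."""
    return bool(signal_step_width(profile) > width_thresh)
```

A gradual ramp of height 50 rising by 10 per sample has width 5 (shadow-like penumbra), while an abrupt 0-to-50 jump has width 1 (object-like edge).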
10. The method according to claim 5, characterized in that the obtaining step (a) includes the sub-step of: (i) relocating at least one mobile projected shadow boundary candidate of the plurality of mobile projected shadow boundary candidates, perpendicular to the limit of the change detection mask, to a position of greater spatial gradient of the difference between the first image and the second image.
11. A method for detecting an image region changed by a mobile projected shadow from a first image to a second image, the method characterized in that it comprises: (a) performing, for each pixel within a change detection mask, the change detection mask corresponding to a difference between the first image and the second image, the following sub-steps: (i) determining whether the pixel and a plurality of neighboring pixels include a static background edge; (ii) determining whether the pixel and the plurality of neighboring pixels are close to an edge with a spatial signal step width greater than a threshold, the signal step width being of the difference between the first image and the second image; (iii) determining whether the pixel and the plurality of neighboring pixels have a uniform temporal illumination change; and (iv) classifying the pixel as changed by a mobile projected shadow when at least two determinations of the group of determinations in steps (a)(i), (a)(ii) and (a)(iii) succeed for the pixel and the plurality of neighboring pixels.
12. The method according to claim 11, characterized in that the determination of sub-step (a)(i) further includes the following sub-steps: (1) detecting a first plurality of edges in the first image; (2) detecting a second plurality of edges in the second image; (3) subtracting the first image from the second image to produce a frame difference; (4) applying a high frequency filter to the frame difference to produce a high frequency frame difference; (5) comparing the high frequency frame difference with a threshold; and (6) classifying, for each pixel of the first plurality of edges and the second plurality of edges, the edge as static when the high frequency frame difference is below the threshold.
13. The method according to claim 11, characterized in that the determination of sub-step (a)(ii) further includes the following sub-steps: (1) obtaining a plurality of mobile projected shadow boundary candidates from the limit of the change detection mask; (2) performing, for each mobile projected shadow boundary candidate of the plurality of mobile projected shadow boundary candidates, the following sub-steps: (A) evaluating a spatial signal step width; and (B) classifying the mobile projected shadow boundary candidate as a mobile projected shadow boundary when the mobile projected shadow boundary candidate has a spatial signal step width exceeding a width threshold.
14. The method according to claim 11, characterized in that the determination of sub-step (a)(iii) further includes the following sub-steps: (1) determining a frame ratio within the change detection mask based on the first image and the second image, the change detection mask corresponding to a difference between the first image and the second image; (2) comparing a local spatial variance of the frame ratio for the pixel with an illumination threshold; and (3) classifying the pixel as having a uniform temporal illumination change when the local spatial variance is below the illumination threshold for the pixel.
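The two-of-three decision of claim 11 reduces to a simple vote once the three criteria have been evaluated per pixel. The sketch below assumes the criteria arrive as booleans; the abstract's fallback (a single criterion succeeding for a majority of the neighboring pixels) is omitted for brevity.

```python
def shadow_pixel(has_static_edge_only, has_soft_edge, has_uniform_illumination):
    """Claim-11 sketch: a pixel inside the change detection mask is
    attributed to a moving cast shadow when at least two of the three
    criteria succeed: (i) any nearby edge is a static background edge,
    (ii) any nearby edge is a soft, penumbra-like step wider than the
    threshold, (iii) the temporal illumination change is uniform."""
    votes = (int(has_static_edge_only) + int(has_soft_edge)
             + int(has_uniform_illumination))
    return votes >= 2
```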
15. A method for segmenting a moving object in front of a rigid background having a moving projected shadow, the method characterized in that it comprises: (a) determining a change detection mask based on a first image and a second image; (b) detecting a portion of the change detection mask changed by the moving projected shadow to produce a remaining portion of the change detection mask; (c) combining the remaining portion of the change detection mask with an object mask of the first image, if available, to produce an object change detection mask; and (d) removing from the object change detection mask a portion corresponding to the background uncovered by the movement of the moving object to produce a mask of the moving object in the second image.
16. The method according to claim 15, characterized in that the detection of step (b) further includes, for each pixel within the change detection mask and a plurality of neighboring pixels, the following sub-steps: (i) determining whether the pixel and the plurality of neighboring pixels include a static background edge; (ii) determining whether the pixel and the plurality of neighboring pixels are close to an edge with a spatial signal step width exceeding a threshold, the signal step width being of the difference between the first image and the second image; (iii) determining whether the pixel and the plurality of neighboring pixels have a uniform temporal illumination change; and (iv) classifying the pixel as changed by a moving projected shadow when at least two determinations of the group of determinations in steps (b)(i), (b)(ii) and (b)(iii) are successful for the pixel and for the plurality of neighboring pixels.
17. The method according to claim 15, characterized in that the removal step (d) further includes the sub-step of: (i) detecting the background uncovered by the movement of the moving object within the change detection mask.
18. A method for detecting an uncovered background within a change detection mask, the change detection mask corresponding to a large difference between a first image and a second image, the method characterized in that it comprises: (a) estimating a displacement vector field having a vector for each pixel of the second image, the vector for each pixel of the second image pointing from a corresponding image position of the first image to the pixel of the second image; and (b) categorizing a portion of the change detection mask as uncovered background where the pixels of the second image have a vector from the displacement vector field with an origin outside the change detection mask.
19.
A method for temporally segmenting and tracking a plurality of moving projected shadows over a static background for a plurality of pairs of sequential images, each image pair having a first image and a second image, the method characterized in that it comprises: (a) setting each pixel of a mask to a first value, the first value indicating no moving projected shadow; (b) performing, for each pair of images of the plurality of pairs of sequential images, the following sub-steps: (1) detecting a plurality of image regions changed by the moving projected shadows; (2) calculating a frame ratio by dividing each pixel of the second image by the corresponding pixel in the first image; (3) classifying each image region detected in the detection step (b)(1) into: (i) a first portion when the image region has a frame ratio greater than or equal to one, and (ii) a second portion when the image region has a frame ratio of less than one; (4) setting to the first value in the mask each image region classified as the first portion in the classification step (b)(3); and (5) setting to a second value in the mask each image region classified as the second portion in the classification step (b)(3). 20. The method according to claim 19, characterized in that the performing step (b) further comprises the sub-step of: (6) eliminating small regions having the first value and small regions having the second value in the mask.
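The tracking rule of claims 19-20 can be sketched as follows. The shadow-region detector is abstracted as a callable `detect_regions` standing in for the methods of claims 5-14, and the two mask values are represented as booleans; both choices, like the function name, are illustrative.

```python
import numpy as np

def track_shadow_mask(image_pairs, detect_regions):
    """Claim-19 sketch: maintain a binary shadow mask across a sequence.
    In each shadow-changed region, a frame ratio below one (darkening)
    sets the shadow value; a ratio of one or more (the shadow moving
    away, re-brightening) resets the pixel to the no-shadow value."""
    mask = np.zeros(image_pairs[0][0].shape, dtype=bool)
    for f1, f2 in image_pairs:
        regions = detect_regions(f1, f2)
        ratio = f2.astype(np.float64) / np.maximum(f1.astype(np.float64), 1e-6)
        mask[regions & (ratio < 1.0)] = True    # second value: shadow cast
        mask[regions & (ratio >= 1.0)] = False  # first value: no shadow
    return mask
```

Because only changed regions update the mask, pixels the shadow merely passes over retain the shadow value until the re-brightening is observed, which is what lets the mask follow the shadow over time.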
MX9809084A 1997-11-03 1998-10-30 Method for detecting moving cast shadows object segmentation. MXPA98009084A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US6410797P 1997-11-03 1997-11-03

Publications (1)

Publication Number Publication Date
MXPA98009084A true MXPA98009084A (en) 2006-01-20

Family

ID=36128712

Family Applications (1)

Application Number Title Priority Date Filing Date
MX9809084A MXPA98009084A (en) 1997-11-03 1998-10-30 Method for detecting moving cast shadows object segmentation.

Country Status (1)

Country Link
MX (1) MXPA98009084A (en)


Legal Events

Date Code Title Description
FG Grant or registration