
EP2570990A1 - Apparatus and method for determining a confidence value of a disparity estimate - Google Patents


Info

Publication number
EP2570990A1
EP2570990A1 (application EP11306142A)
Authority
EP
European Patent Office
Prior art keywords
pixels
pixel
group
disparity estimate
disparity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP11306142A
Other languages
German (de)
French (fr)
Inventor
Markus Schlosser
Joern Jachalsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to EP11306142A priority Critical patent/EP2570990A1/en
Priority to US13/611,727 priority patent/US8897545B2/en
Publication of EP2570990A1 publication Critical patent/EP2570990A1/en
Withdrawn (current legal status)

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Definitions

  • the invention relates to a method and to an apparatus for determining a confidence value of a disparity estimate, and more specifically to a method and to an apparatus for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images.
  • in 3D-TV, 3D-video and 3D-cinema, information of two or even more images is joined together for production of a spatial reproduction of image content.
  • two stereoscopic images are used for computation of depth information, wherein a matching process is applied to find point correspondences in the two input or basic images.
  • the displacement between two corresponding points in the basic images resulting from the different positions of the cameras when capturing the real world scene is commonly referred to as disparity.
  • a 3D-structure i.e. the depth information of the captured scene, may be reconstructed from these disparities by triangulation if the camera parameters are known.
  • Depth information for the pixels in the basic images is usually integrated into a disparity map containing the result of the respective matching calculations.
  • the performance of the stereo matching process inherently depends on the underlying image content. Even for ideal conditions there still remain several problems, e.g. occluded areas in one of the input pictures, perspective deformations due to lens distortions, specular reflections or missing texture in some region of the image, etc., that make the matching process a challenging task. For some parts of an image it is inherently more difficult to determine accurate values for the disparity, also referred to as disparity estimates, than for others. This leads to varying levels of accuracy and reliability for the disparity estimates.
  • the above can be accomplished with a confidence evaluation, which determines the reliability of a disparity estimate to evaluate whether it is an accurate point correspondence or not.
  • the confidence evaluation provides a certain level of selectivity.
  • An increased selectivity of the confidence evaluation leads to a higher share of accurate point correspondences at the cost of a reduced coverage.
  • the share of accurate point correspondences is close to 100% for the highest confidence values or an interval comprising only the highest confidence values and then it slowly decreases for lower confidence values with a high concentration of the remaining inaccurate point correspondences at the confidence of 0.
  • a method for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images, wherein the confidence value is a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels comprises the steps of:
  • the general idea is to evaluate the neighborhood relations of pixels with reliable disparity estimates to increase the selectivity of the confidence evaluation. Thereby, the distances of pixels with reliable disparity estimates to pixels with unreliable disparity estimates are used to calculate the confidence value of the disparity estimate. Reliable estimates are considered more reliable if surrounded by other reliable estimates and thus far away from unreliable ones.
  • the initial reliability value of the pixel or the group of pixels is determined from a remaining distance between the pixel or the group of pixels in a first stereo image and a back reference of a corresponding pixel or a corresponding group of pixels in a second stereo image, wherein the corresponding pixel or the corresponding group of pixels is defined by the disparity estimate for the pixel or the group of pixels of the selected image.
  • the remaining distance is a measure for the inconsistency and can be determined based on the left-right consistency. Moreover, it is highly suited to determine the initial reliability.
  • the disparity estimate of the pixel or the group of pixels is classified as unreliable if the remaining distance is equal to or larger than an upper threshold, e.g. three pixels.
  • the disparity estimate of the pixel or the group of pixels is classified as reliable if the remaining distance is equal to or lower than a lower threshold.
  • the lower threshold is zero.
  • for pixels or groups of pixels with a very small or even no left-right inconsistency the disparity estimate is assumed to be reliable.
  • for pixels or groups of pixels with a rather large left-right inconsistency the disparity estimate is assumed to be unreliable.
  • a visibility of the pixel or the group of pixels is determined, wherein a pixel or a group of pixels in the first stereo image is visible if it is matched by at least one pixel or at least one group of pixels in the second stereo image, and wherein a pixel or a group of pixels in the first stereo image is not visible if it is not matched by any pixel or any group of pixels in the second stereo image.
  • the disparity estimate of the pixel or the group of pixels is also classified as reliable if the remaining distance is equal to the lower threshold plus one and the pixel or the group of pixels is not visible.
  • Both the disparity estimation as well as the confidence evaluation are preferably done on full-pel resolution only, in order to limit the necessary computational effort. Taking the visibility into account allows to handle the case of horizontally slanted surfaces, which can have a different width in the two views. As a consequence, the disparity estimation for one view needs to omit a pixel every now and then in the other view.
  • the disparity estimate of the pixel or the group of pixels is classified as undecided if the disparity estimate of the pixel or the group of pixels is neither reliable nor unreliable.
  • Undecided disparity estimates are used to determine a second distance, namely the distance of the pixel or the group of pixels to a nearest pixel or group of pixels with a not-reliable disparity estimate, wherein the not-reliable disparity estimate is either undecided or unreliable.
  • the confidence value of the disparity estimate for the pixel or the group of pixels is then determined from the first distance and the second distance. This allows to give different weights to clearly unreliable disparity estimates and to disparity estimates that are not-reliable, but not necessarily unreliable.
  • a special consistency value is derived from the first distance, an upper bound for the first distance, the second distance, and a range factor.
  • the confidence value of the disparity estimate for the pixel or the group of pixels is then determined by multiplying the special consistency value with a scaled matching quality value.
  • the scaled matching quality value is a scaled correlation coefficient obtained by a zero-mean normalized cross correlation or another similarity measure.
  • a confidence value of zero is assigned to the disparity estimate of the pixel or the group of pixels if the disparity estimate of the pixel or the group of pixels is classified as unreliable. This allows to fully exclude unreliable disparity estimates from further processing steps, e.g. refinement or post-processing.
  • an apparatus for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images, wherein the confidence value is a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels is adapted to perform the above described method steps for determining the confidence value.
  • the purpose of the confidence evaluation is to determine the reliability of a disparity estimate and thus to evaluate whether it is an accurate point correspondence or not.
  • the resulting confidence values gradually indicate the level of reliability of the corresponding disparity estimates ranging from unreliable (lowest confidence value) to highly reliable (highest confidence value).
  • the confidence evaluation combines the quality of a stereo match (in terms of the matching metric used) with its consistency (in terms of the uniqueness and visibility constraints) in a single continuous confidence value that explicitly models the reliability of the disparity estimate.
  • the general idea is that only those estimates are considered reliable that achieve a high matching score and are consistent.
  • Input for the confidence evaluation are two disparity maps D L and D R estimated for an image pair consisting of a left and right view.
  • D L is the map for the estimation from left to right view
  • D R is the map for the estimation from right to left view.
  • D L is the map for the estimation from left to right view
  • D R is the map for the estimation from right to left view.
  • a method according to the invention for determining a confidence value is schematically depicted in Fig. 1 .
  • in a first step 1 of the confidence evaluation the left-right consistency is determined utilizing the uniqueness constraint.
  • For each pixel p in the left view it is checked if the corresponding pixel p + D L ( p ) in the right view refers back to the original pixel p.
  • d LR is calculated for each pixel p in the left view. In addition, it is checked if the pixel p also satisfies the visibility constraint. The latter requires that a visible pixel is matched by at least one pixel in the other view and a non-visible pixel is not matched. Instead of performing the calculations on a pixel basis, it is likewise possible to use groups of pixels.
  • in a second step 2 the disparity estimate of each pixel is classified as either unreliable, reliable, or undecided based on the inconsistency d LR and visibility.
  • the disparity estimate of a pixel is considered unreliable if d LR is equal to or larger than an upper threshold th up , i.e. d LR ≥ th up .
  • the upper threshold th up is 3.
  • either d LR ≤ th lo , or d LR = th lo + 1 and the pixel is not visible, where th lo is a lower threshold.
  • the lower threshold th lo is 0.
  • the second condition in equation (3) is introduced as both the disparity estimation as well as the confidence evaluation are done on full-pel resolution only. It handles the case of horizontally slanted surfaces, which can have a different width in the two views. As a consequence, the disparity estimation for one view needs to omit a pixel every now and then in the other view. Thus, for the omitted pixel, which is not visible, the remaining distance d LR is one higher than the remaining distance for the adjacent not-omitted pixel. In the preferable case with th lo set to 0, d LR is 1 for the omitted pixel.
  • The two distances are illustrated in Fig. 2, where light grey pixels have a reliable disparity estimate, dark grey pixels have an unreliable disparity estimate, and white pixels are undecided.
  • for each distance an upper bound is specified, namely d un_max and d not_max .
  • the upper bounds define the distance that a pixel with a reliable disparity estimate must have to the next pixel with an unreliable and/or undecided disparity estimate in order to achieve a maximum confidence.
  • These upper bounds are introduced to limit the search range and, thus, limit the processing costs.
  • pixels that are too far away, i.e. not in the spatial proximity, provide only little additional information to determine the confidence of the disparity estimate of the considered pixel.
  • the maximum confidence is achieved for d un ≥ d un_max and d not ≥ d not_max .
  • the scaling factor is introduced to have an identical and thus comparable maximum for the confidence values while varying the upper bounds and, hence, the selectivity.
  • a confidence of 0 is assigned to unreliable disparity estimates, for which d un is 0.
  • for reliable and undecided disparity estimates the special consistency value C scv is calculated according to equation (8) as the sum of two contributions: a term C un , which increases with the distance d un up to its upper bound d un_max , and a term C not , which increases with the distance d not , i.e. C scv = C un + C not .
  • C mqv is a scaled correlation coefficient, which is preferably obtained by a zero-mean normalized cross correlation, and is in the interval of [0.0,1.0].
  • The greater the value of γ, the greater the distance of a pixel with a reliable disparity estimate to pixels with unreliable and not reliable disparity estimates must be to obtain a high confidence, which results in an increased selectivity.
  • the highest confidence is typically achieved for disparity values inside of objects.
  • the disparities at the object borders are assigned lower values.
  • the increased selectivity can, amongst others, be used to diminish the impact of errors caused by enlarged foreground objects due to the employed window-based estimation approach.
  • Fig. 3 displays the left views of the four stereo pairs Tsukuba (Fig. 3a), Venus (Fig. 3b), Teddy (Fig. 3c), and Cones (Fig. 3d) of the Middlebury test set, which was used as an input for the present evaluation.
  • Fig. 4 depicts the ground truth for the four stereo pairs
  • Fig. 5 shows in white the masks for non-occluded and non-border pixels.
  • While Fig. 5a) and Fig. 5b) depict the original, unaltered masks for Tsukuba and Venus as provided on the Middlebury website, the masks for Teddy and Cones have a border region that was set to be four pixels wide, i.e. half the window size. This region was introduced to simplify the window handling.
  • the disparities were estimated for the two directions left-to-right and right-to-left. In contrast to the above cited work of H. Hirschmüller et al. no post-processing or refinement was done. Finally, the confidence values were calculated as described above. For the present evaluation the confidence values were only calculated for the disparity map D L . The disparity estimation and confidence evaluation were done on full-pel resolution only and no additional sub-pel interpolation was applied.
  • the range of the confidence values was split into intervals and for each interval the share of bad matches for all non-occluded pixels covered by the interval was determined.
  • a pixel is considered a bad match if its disparity differs by more than one from the ground truth.
  • the confidence values were normalized to be in the range from 0.0 (unreliable) to 1.0 (highly reliable).
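The evaluation protocol above (coverage and share of bad matches per confidence interval) could be sketched as follows; the function name, the interval parametrization by lower bounds, and the percentage convention are illustrative choices, not taken from the patent:

```python
import numpy as np

def interval_statistics(confidence, disparity, ground_truth, mask, lower_bounds):
    """For each confidence interval [1.0; lo] report the pixel coverage and
    the share of bad matches among the covered pixels selected by `mask`
    (e.g. the non-occluded, non-border pixels).  A pixel is a bad match if
    its disparity differs by more than one from the ground truth."""
    bad = np.abs(disparity - ground_truth) > 1
    total = mask.sum()
    stats = []
    for lo in lower_bounds:
        covered = mask & (confidence >= lo)   # pixels inside [1.0; lo]
        n = covered.sum()
        coverage = 100.0 * n / total
        error = 100.0 * (bad & covered).sum() / n if n else 0.0
        stats.append((lo, coverage, error))
    return stats
```

Widening the interval (lowering `lo`) increases coverage and, as Table 1 shows, typically the error share as well.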
  • Table 1 displays the results as an average over all four image pairs (Tsukuba, Venus, Teddy, and Cones) for different range factors γ.
  • the sub-columns 'Coverage' contain the pixel coverage and the sub-columns 'Error' the corresponding share of bad pixels.
  • Table 2
                   Tsukuba           Venus             Teddy             Cones
    Interval I k   Cov.[%] Err.[%]   Cov.[%] Err.[%]   Cov.[%] Err.[%]   Cov.[%] Err.[%]
    [1.0;0.9]      4.87    0.02      14.30   0.10      9.36    0.03      12.98   0.19
    [1.0;0.8]      9.39    0.11      21.93   0.13      15.80   0.13      20.90   0.32
    [1.0;0.7]      14.80   0.26      28.73   0.19      23.77   0.38      29.39   0.39
    [1.0;0.6]      20.84   0.44      35.88   0.29      33.39   0.82      38.71   0.48
    [1.0;0.5]      29.10   0.57      43.99   0.47      44.01   1.61      48.22   0.72
    [1.0;0.4]      38.53   0.74      52.03   0.58
  • Annotated disparity images for one of the four stereo pairs (Teddy) are depicted in Figs. 6a) to j).
  • black areas designate occluded pixels or pixels belonging to the border region
  • white areas designate bad matches
  • light grey areas designate good matches
  • dark grey areas designate pixels outside the confidence interval.
  • the results of the evaluation clearly substantiate that the selectivity of the confidence evaluation can be increased with only a single parameter, the range factor γ, that defines the minimum distances that a pixel with a reliable disparity estimate must have to pixels with unreliable or not reliable disparity estimates to be assigned a maximum confidence value.
  • the increased selectivity leads to a lower share of bad matches, especially for the higher confidence intervals, but at the cost of a reduced coverage.
  • the results show the improved concentration of bad matches at the confidence of 0.
  • there is a trade-off between an increased selectivity (lower share of bad pixels for the higher intervals) and the achieved coverage.
  • the selection of a range factor suited best clearly depends on the application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method and an apparatus for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images are described, the confidence value (C) being a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels. First an initial reliability value of the disparity estimate for the pixel or the group of pixels is determined (2), wherein the reliability is one of at least reliable and unreliable. Then a distance (d un) of the pixel or the group of pixels to a nearest pixel or group of pixels with an unreliable disparity estimate is determined (3). Finally, the confidence value (C) of the disparity estimate for the pixel or the group of pixels is obtained (5) from the determined distance (d un).

Description

  • The invention relates to a method and to an apparatus for determining a confidence value of a disparity estimate, and more specifically to a method and to an apparatus for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images.
  • In 3D-TV, 3D-video and 3D-cinema, information of two or even more images is joined together for production of a spatial reproduction of image content. Typically, two stereoscopic images are used for computation of depth information, wherein a matching process is applied to find point correspondences in the two input or basic images. The displacement between two corresponding points in the basic images resulting from the different positions of the cameras when capturing the real world scene is commonly referred to as disparity. A 3D-structure, i.e. the depth information of the captured scene, may be reconstructed from these disparities by triangulation if the camera parameters are known. Depth information for the pixels in the basic images is usually integrated into a disparity map containing the result of the respective matching calculations.
  • The performance of the stereo matching process inherently depends on the underlying image content. Even for ideal conditions there still remain several problems, e.g. occluded areas in one of the input pictures, perspective deformations due to lens distortions, specular reflections or missing texture in some region of the image, etc., that make the matching process a challenging task. For some parts of an image it is inherently more difficult to determine accurate values for the disparity, also referred to as disparity estimates, than for others. This leads to varying levels of accuracy and reliability for the disparity estimates.
  • For some applications, e.g. for subtitling or positioning of graphical overlays, it is beneficial to select a reliable or even highly reliable subset of disparity estimates from a dense disparity map in order to create a reliable or highly reliable sparse disparity map. Moreover, for post-production purposes it is beneficial to accurately mark problematic and non-problematic regions to process them with special algorithms etc.
  • The above can be accomplished with a confidence evaluation, which determines the reliability of a disparity estimate to evaluate whether it is an accurate point correspondence or not. To this end the confidence evaluation provides a certain level of selectivity. An increased selectivity of the confidence evaluation leads to a higher share of accurate point correspondences at the cost of a reduced coverage. Ideally, the share of accurate point correspondences is close to 100% for the highest confidence values or an interval comprising only the highest confidence values and then it slowly decreases for lower confidence values with a high concentration of the remaining inaccurate point correspondences at the confidence of 0.
  • It is thus an object of the invention to propose a method for determining a confidence value of a disparity estimate for a pixel or a group of pixels.
  • According to one aspect of the invention, a method for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images, wherein the confidence value is a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels, comprises the steps of:
    • determining an initial reliability value of the disparity estimate for the pixel or the group of pixels, wherein the initial reliability value is one of at least reliable and unreliable;
    • determining a first distance of the pixel or the group of pixels to a nearest pixel or group of pixels with an unreliable disparity estimate; and
    • determining the confidence value of the disparity estimate for the pixel or the group of pixels from the determined distance.
  • The general idea is to evaluate the neighborhood relations of pixels with reliable disparity estimates to increase the selectivity of the confidence evaluation. Thereby, the distances of pixels with reliable disparity estimates to pixels with unreliable disparity estimates are used to calculate the confidence value of the disparity estimate. Reliable estimates are considered more reliable if surrounded by other reliable estimates and thus far away from unreliable ones.
  • Advantageously, the initial reliability value of the pixel or the group of pixels is determined from a remaining distance between the pixel or the group of pixels in a first stereo image and a back reference of a corresponding pixel or a corresponding group of pixels in a second stereo image, wherein the corresponding pixel or the corresponding group of pixels is defined by the disparity estimate for the pixel or the group of pixels of the selected image.
  • The remaining distance is a measure for the inconsistency and can be determined based on the left-right consistency. Moreover, it is highly suited to determine the initial reliability.
  • Preferably, the disparity estimate of the pixel or the group of pixels is classified as unreliable if the remaining distance is equal to or larger than an upper threshold, e.g. three pixels. Similarly, the disparity estimate of the pixel or the group of pixels is classified as reliable if the remaining distance is equal to or lower than a lower threshold. Favorably, the lower threshold is zero.
  • In this way it is ensured that only for pixels or groups of pixels with a very small or even no left-right inconsistency the disparity estimate is assumed to be reliable. For pixels or groups of pixels with a rather large left-right inconsistency the disparity estimate is assumed to be unreliable.
  • Advantageously, a visibility of the pixel or the group of pixels is determined, wherein a pixel or a group of pixels in the first stereo image is visible if it is matched by at least one pixel or at least one group of pixels in the second stereo image, and wherein a pixel or a group of pixels in the first stereo image is not visible if it is not matched by any pixel or any group of pixels in the second stereo image. Based on the determined visibility, the disparity estimate of the pixel or the group of pixels is also classified as reliable if the remaining distance is equal to the lower threshold plus one and the pixel or the group of pixels is not visible.
  • Both the disparity estimation as well as the confidence evaluation are preferably done on full-pel resolution only, in order to limit the necessary computational effort. Taking the visibility into account allows to handle the case of horizontally slanted surfaces, which can have a different width in the two views. As a consequence, the disparity estimation for one view needs to omit a pixel every now and then in the other view.
  • Favorably, the disparity estimate of the pixel or the group of pixels is classified as undecided if the disparity estimate of the pixel or the group of pixels is neither reliable nor unreliable.
  • Undecided disparity estimates are used to determine a second distance, namely the distance of the pixel or the group of pixels to a nearest pixel or group of pixels with a not-reliable disparity estimate, wherein the not-reliable disparity estimate is either undecided or unreliable. The confidence value of the disparity estimate for the pixel or the group of pixels is then determined from the first distance and the second distance. This allows to give different weights to clearly unreliable disparity estimates and to disparity estimates that are not-reliable, but not necessarily unreliable.
  • Preferably, a special consistency value is derived from the first distance, an upper bound for the first distance, the second distance, and a range factor. The confidence value of the disparity estimate for the pixel or the group of pixels is then determined by multiplying the special consistency value with a scaled matching quality value. The scaled matching quality value is a scaled correlation coefficient obtained by a zero-mean normalized cross correlation or another similarity measure.
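A scaled matching quality value based on zero-mean normalized cross correlation could look like the sketch below. The linear mapping from [-1, 1] to [0, 1] is one plausible scaling into the required interval; the patent names the ZNCC but does not prescribe this particular scaling or function signature:

```python
import numpy as np

def scaled_zncc(patch_a, patch_b, eps=1e-9):
    """Zero-mean normalized cross correlation of two equally sized patches,
    scaled from [-1, 1] into [0, 1] to serve as matching quality C_mqv."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps  # eps avoids /0 on flat patches
    zncc = (a * b).sum() / denom
    return 0.5 * (zncc + 1.0)  # map [-1, 1] -> [0, 1]
```

Subtracting the patch means makes the measure invariant to additive brightness offsets, which is why ZNCC is a common matching metric for stereo.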
  • Favorably, a confidence value of zero is assigned to the disparity estimate of the pixel or the group of pixels if the disparity estimate of the pixel or the group of pixels is classified as unreliable. This allows to fully exclude unreliable disparity estimates from further processing steps, e.g. refinement or post-processing.
  • Advantageously, an apparatus for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images, wherein the confidence value is a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels, is adapted to perform the above described method steps for determining the confidence value.
  • For a better understanding the invention shall now be explained in more detail in the following description with reference to the figures. It is understood that the invention is not limited to this exemplary embodiment and that specified features can also expediently be combined and/or modified without departing from the scope of the present invention as defined in the appended claims. In the figures:
  • Fig. 1
    schematically depicts a method according to the invention,
    Fig. 2
    illustrates the distances of a pixel to pixels with unreliable and not reliable disparity estimates,
    Fig. 3
    depicts the left views of four stereo pairs,
    Fig. 4
    shows the ground truth for the four stereo pairs of Fig. 3,
    Fig. 5
    depicts masks for non-occluded and non-border pixels for the four stereo pairs of Fig. 3, and
    Fig. 6
    shows annotated disparity maps for the Teddy stereo image pair for a range factor γ=3.
  • The purpose of the confidence evaluation is to determine the reliability of a disparity estimate and thus to evaluate whether it is an accurate point correspondence or not. The resulting confidence values gradually indicate the level of reliability of the corresponding disparity estimates ranging from unreliable (lowest confidence value) to highly reliable (highest confidence value). In the approach that is described in the following, the confidence evaluation combines the quality of a stereo match (in terms of the matching metric used) with its consistency (in terms of the uniqueness and visibility constraints) in a single continuous confidence value that explicitly models the reliability of the disparity estimate. The general idea is that only those estimates are considered reliable that achieve a high matching score and are consistent.
  • Input for the confidence evaluation are two disparity maps D L and D R estimated for an image pair consisting of a left and right view. D L is the map for the estimation from left to right view and D R is the map for the estimation from right to left view. Hereinafter only the calculation of the confidence for the disparity map D L is described. The calculation of the confidence for the other disparity map D R is done in an analogous manner.
  • A method according to the invention for determining a confidence value is schematically depicted in Fig. 1. In a first step 1 of the confidence evaluation the left-right consistency is determined utilizing the uniqueness constraint. For each pixel p in the left view it is checked if the corresponding pixel p+D L(p) in the right view refers back to the original pixel p. The potentially remaining distance d LR between the back reference and the original pixel is a measure for the inconsistency and can be calculated as d LR(p) = |D L(p) + D R(p + D L(p))| .     (1)
  • The term d LR is calculated for each pixel p in the left view. In addition, it is checked if the pixel p also satisfies the visibility constraint. The latter requires that a visible pixel is matched by at least one pixel in the other view and a non-visible pixel is not matched. Instead of performing the calculations on a pixel basis, it is likewise possible to use groups of pixels.
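The left-right consistency check described above could be sketched as follows for horizontal disparities; the function name and the sentinel value for back references that fall outside the image are illustrative assumptions, not part of the patent:

```python
import numpy as np

def left_right_inconsistency(d_left, d_right):
    """Remaining distance d_LR = |D_L(p) + D_R(p + D_L(p))| for every pixel
    of the left view.  d_left is the signed integer disparity map estimated
    left-to-right, d_right the map estimated right-to-left.  Back references
    leaving the image are marked with a large sentinel so they count as
    inconsistent."""
    h, w = d_left.shape
    d_lr = np.full((h, w), 255, dtype=np.int32)  # sentinel for invalid refs
    cols = np.arange(w)
    for y in range(h):
        target = cols + d_left[y]                # column of p + D_L(p) in the right view
        valid = (target >= 0) & (target < w)
        d_lr[y, valid] = np.abs(d_left[y, valid] + d_right[y, target[valid]])
    return d_lr
```

For a perfectly consistent match D_R(p + D_L(p)) = -D_L(p), so d_LR is 0.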
• In a second step 2 the disparity estimate of each pixel is classified as either unreliable, reliable, or undecided based on the inconsistency d LR and the visibility. The disparity estimate of a pixel is considered unreliable if d LR is equal to or larger than an upper threshold th up, i.e. d LR ≥ th up. Preferably, the upper threshold th up is 3. For a reliable disparity estimate of a pixel, one of the following conditions must be fulfilled:

$$d_{LR} \leq th_{lo} \qquad (2)$$

$$d_{LR} = th_{lo} + 1 \;\text{ and pixel is not visible,} \qquad (3)$$

where th lo is a lower threshold. Preferably, the lower threshold th lo is 0. The second condition, equation (3), is introduced because both the disparity estimation and the confidence evaluation are done on full-pel resolution only. It handles the case of horizontally slanted surfaces, which can have a different width in the two views. As a consequence, the disparity estimation for one view needs to omit a pixel in the other view every now and then. Thus, for the omitted pixel, which is not visible, the remaining distance d LR is one higher than that of the adjacent not-omitted pixel. In the preferred case with th lo set to 0, d LR is 1 for the omitted pixel.
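The classification rules above (unreliable if d LR ≥ th up; reliable if d LR ≤ th lo, or if d LR = th lo + 1 for a non-visible pixel; undecided otherwise) can be sketched as follows; the label constants and function name are illustrative, not taken from the patent:

```python
import numpy as np

UNRELIABLE, RELIABLE, UNDECIDED = 0, 1, 2  # illustrative label constants

def classify(d_lr, visible, th_lo=0, th_up=3):
    """Classify each disparity estimate from its inconsistency and visibility.

    unreliable: d_lr >= th_up
    reliable:   d_lr <= th_lo, or d_lr == th_lo + 1 for a non-visible pixel
    undecided:  everything else
    """
    labels = np.full(d_lr.shape, UNDECIDED, dtype=np.uint8)
    labels[d_lr >= th_up] = UNRELIABLE
    reliable = (d_lr <= th_lo) | ((d_lr == th_lo + 1) & ~visible)
    labels[reliable & (d_lr < th_up)] = RELIABLE
    return labels
```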
  • Disparity estimates that are neither unreliable nor reliable are classified as undecided. Based on this initial classification the following two distances are determined in a third step 3:
    • d un: distance to the next pixel with an unreliable disparity estimate;
    • d not: distance to the next pixel with a not reliable disparity estimate, which is either undecided or unreliable.
  • The two distances are illustrated in Fig. 2, where light grey pixels have a reliable disparity estimate, dark grey pixels have an unreliable disparity estimate, and white pixels are undecided. For each distance an upper bound is specified, namely d un_max and d not_max. The upper bounds define the distance that a pixel with a reliable disparity estimate must have to the next pixel with an unreliable and/or undecided disparity estimate in order to achieve a maximum confidence. These upper bounds are introduced to limit the search range and, thus, limit the processing costs. Moreover, pixels that are too far away, i.e. not in the spatial proximity, provide only little additional information to determine the confidence of the disparity estimate of the considered pixel.
• Both bounds include a scaling factor γ:

$$d_{not\_max} = \gamma \, k_{not} \qquad (4)$$

$$d_{un\_max} = \gamma \, k_{un} \, , \qquad (5)$$

where k not is a base value for the distance to the next pixel with a not-reliable disparity estimate and k un is a base value for the distance to the next pixel with an unreliable disparity estimate. The maximum confidence is achieved for d un ≥ d un_max and d not ≥ d not_max. The scaling factor γ is introduced to have an identical and thus comparable maximum for the confidence values while varying the upper bounds and, hence, the selectivity.
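The two capped distances d un and d not can be computed with a simple two-sweep 1-D distance transform along each scanline. Treating the distance per scanline is an assumption of this sketch (the patent does not fix the search topology), and the label constants are illustrative:

```python
import numpy as np

UNRELIABLE, RELIABLE, UNDECIDED = 0, 1, 2  # illustrative label constants

def capped_distance_1d(labels, targets, d_max):
    """Distance from each pixel to the nearest pixel labelled in `targets`,
    measured along the scanline and capped at d_max to bound the search.

    Two sweeps per row (left-to-right and right-to-left) give the exact
    1-D distance transform; a pixel in `targets` has distance 0.
    """
    h, w = labels.shape
    dist = np.empty((h, w), dtype=np.int64)
    hit = np.isin(labels, targets)
    for y in range(h):
        run = d_max
        for x in range(w):                 # forward sweep
            run = 0 if hit[y, x] else min(run + 1, d_max)
            dist[y, x] = run
        run = d_max
        for x in range(w - 1, -1, -1):     # backward sweep
            run = 0 if hit[y, x] else min(run + 1, d_max)
            dist[y, x] = min(dist[y, x], run)
    return dist
```

With this helper, d un = capped_distance_1d(labels, [UNRELIABLE], d un_max) and d not = capped_distance_1d(labels, [UNRELIABLE, UNDECIDED], d not_max); note that an unreliable pixel itself gets d un = 0, matching the convention used in step 4.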
• In a fourth step 4 a confidence of 0 is assigned to unreliable disparity estimates. For a pixel with an unreliable disparity estimate d un is 0. For each pixel with a reliable disparity estimate a special consistency value C scv is calculated using the below equation (8). For a pixel with a reliable disparity estimate it holds that d un > 0 and d not > 0:

$$C_{un} = \alpha_1 \, \frac{d_{un}}{d_{un\_max}} - \alpha_2 \left( \frac{d_{un}}{\gamma} \right)^{-2} \qquad (6)$$

$$C_{not} = \beta \, \frac{d_{not} - \gamma}{\gamma} \, , \qquad (7)$$

$$C_{scv} = C_{un} + C_{not} \qquad (8)$$
• In the above formulae α1, α2, and β are parameterization coefficients. Pixels with undecided disparity estimates are treated separately. For those it holds that d un > 0 and d not = 0. Their special consistency value is calculated using the below equation (9) instead of the above equation (8):

$$C_{scv} = \delta \, C_{un} \qquad (9)$$
• In a fifth and final step 5 the special consistency value C scv is multiplied with a scaled matching quality value C mqv to obtain the final confidence value C:

$$C = C_{mqv} \, C_{scv} \qquad (10)$$
• C mqv is a scaled correlation coefficient, which is preferably obtained by a zero-mean normalized cross correlation, and lies in the interval [0.0, 1.0].
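Steps 4 and 5 can be sketched per pixel as below. This is an illustrative reading of the formulae, not an authoritative implementation: in particular, the penalty term in C un is assumed to decay as (d un/γ)^-2, so that the maximum of C un is independent of γ, and the defaults mirror the parameter set used in the evaluation (k not=4.0, k un=7.0, α1=5, α2=0.5, β=25, δ=0.5):

```python
def confidence(d_un, d_not, c_mqv, gamma=3.0, k_un=7.0, k_not=4.0,
               alpha1=5.0, alpha2=0.5, beta=25.0, delta=0.5):
    """Special consistency value times matching quality for one pixel.

    d_un == 0  -> unreliable estimate, confidence 0
    d_not == 0 -> undecided estimate,  C_scv = delta * C_un
    otherwise  -> reliable estimate,   C_scv = C_un + C_not
    c_mqv is the scaled matching quality value in [0.0, 1.0].
    """
    if d_un == 0:
        return 0.0
    d_un_c = min(d_un, gamma * k_un)        # maximum reached at d_un_max
    c_un = alpha1 * d_un_c / (gamma * k_un) - alpha2 * (d_un_c / gamma) ** -2
    if d_not == 0:                          # undecided pixel
        return c_mqv * delta * c_un
    d_not_c = min(d_not, gamma * k_not)     # maximum reached at d_not_max
    c_not = beta * (d_not_c - gamma) / gamma
    return c_mqv * (c_un + c_not)
```

The raw values are not yet bounded to [0.0, 1.0]; the evaluation section normalizes the confidence values to that range afterwards.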
• The greater the value of γ, the greater the distance of a pixel with a reliable disparity estimate to pixels with unreliable and not-reliable disparity estimates must be to obtain a high confidence, which results in an increased selectivity. The highest confidence is typically achieved for disparity values inside of objects; the disparities at the object borders are assigned lower values. Thus, the increased selectivity can, amongst others, be used to diminish the impact of errors caused by enlarged foreground objects due to the employed window-based estimation approach.
• In the following the results of the confidence evaluation in accordance with the present invention shall be described. For the disparity estimation a local window-based approach was employed with zero-mean normalized cross correlation as cost function. For further details see H. Hirschmüller et al.: "Evaluation of stereo matching costs on images with radiometric differences", IEEE Transact. Patt. Anal. Mach. Intell., vol. 31 (2009), pp. 1582-1599. The costs were calculated for the complete disparity range [0;d max-1] and the disparity with the lowest aggregated costs was selected (so-called "winner-takes-all" mechanism). The window size was set to 8×8 and d max=60. Fig. 3 displays the left views of the four stereo pairs Tsukuba (Fig. 3a)), Venus (Fig. 3b)), Teddy (Fig. 3c)), and Cones (Fig. 3d)) of the Middlebury test set, which was used as an input for the present evaluation. For details of the Middlebury test set see http://vision.middlebury.edu/stereo/ and D. Scharstein et al.: "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms", Int. J. Comput. Vis., vol. 47 (2002), pp. 7-42. Fig. 4 depicts the ground truth for the four stereo pairs and Fig. 5 shows in white the masks for non-occluded and non-border pixels. For the sake of brevity, they are both hereinafter referred to as non-occluded pixels. While Fig. 4a) and Fig. 4b) depict the original, unaltered masks for Tsukuba and Venus as provided on the Middlebury website, the masks for Teddy and Cones have a border region that was set to be four pixels wide, i.e. half the window size. This region was introduced to simplify the window handling.
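The zero-mean normalized cross correlation used as the cost function can be sketched for a single pair of windows as follows; the function name is illustrative, and the small eps guarding against constant windows is an assumption of this sketch:

```python
import numpy as np

def zncc(a, b, eps=1e-9):
    """Zero-mean normalized cross correlation of two equal-sized windows.

    Returns a value in [-1, 1]; 1 means the windows differ only by an
    affine brightness change, which makes the metric radiometrically robust.
    """
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```

In a winner-takes-all scheme the correlation is evaluated for every candidate disparity in [0;d max-1] and the disparity with the best score is kept.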
• The disparities were estimated for the two directions, left-to-right and right-to-left. In contrast to the above-cited work of H. Hirschmüller et al., no post-processing or refinement was done. Finally, the confidence values were calculated as described above; for the present evaluation they were only calculated for the disparity map D L. The disparity estimation and confidence evaluation were done on full-pel resolution only, and no additional sub-pel interpolation was applied.
  • In order to assess the selectivity of the confidence evaluation, the range of the confidence values was split into intervals and for each interval the share of bad matches for all non-occluded pixels covered by the interval was determined. A pixel is considered a bad match if its disparity differs by more than one from the ground truth. For the present evaluation the confidence values were normalized to be in the range from 0.0 (unreliable) to 1.0 (highly reliable).
• For each confidence interval I k the number of non-occluded pixels N nocc;k covered by the interval was determined, and then the number of bad matches N bad;k among those non-occluded pixels. From these, for each interval the coverage, i.e. the ratio of N nocc;k to the total number of non-occluded pixels in the view, as well as the corresponding share of bad pixels, i.e. the ratio of N bad;k to N nocc;k, was calculated.
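The per-interval statistics can be sketched as follows. The sketch assumes confidence values already normalized to [0.0, 1.0] and treats the interval [1.0;lo] as closed at its lower end; the function and parameter names are illustrative:

```python
import numpy as np

def interval_stats(conf, disp, gt, mask, lo):
    """Coverage and bad-match share for the confidence interval [1.0; lo].

    conf: normalized confidence map; disp, gt: estimated and ground-truth
    disparities; mask: True for non-occluded, non-border pixels.
    A pixel is a bad match if its disparity is off by more than one.
    """
    in_interval = mask & (conf >= lo)
    n_nocc = int(in_interval.sum())
    if n_nocc == 0:
        return 0.0, 0.0
    n_bad = int((np.abs(disp - gt)[in_interval] > 1).sum())
    coverage = n_nocc / int(mask.sum())     # share of all non-occluded pixels
    error = n_bad / n_nocc                  # share of bad matches inside
    return coverage, error
```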
  • Table 1 displays the results as an average over all four image pairs (Tsukuba, Venus, Teddy, and Cones) for different range factors γ. The parameter set used for the evaluation was k not=4.0, k un=7.0, α1=5, α2=0.5, β=25, and δ=0.5. The sub-columns 'Coverage' contain the pixel coverage and the sub-columns 'Error' the corresponding share of bad pixels. Table 1
(all values in %)

| Interval I_k | Cov. γ=1.0 | Err. γ=1.0 | Cov. γ=2.0 | Err. γ=2.0 | Cov. γ=3.0 | Err. γ=3.0 | Cov. γ=4.0 | Err. γ=4.0 |
|---|---|---|---|---|---|---|---|---|
| [1.0;0.9] | 41.41 | 1.40 | 22.94 | 0.32 | 11.01 | 0.11 | 4.89 | 0.00 |
| [1.0;0.8] | 55.40 | 1.84 | 32.83 | 0.46 | 17.87 | 0.19 | 9.10 | 0.09 |
| [1.0;0.7] | 63.34 | 2.11 | 41.77 | 0.74 | 25.23 | 0.31 | 14.41 | 0.18 |
| [1.0;0.6] | 70.57 | 2.73 | 50.54 | 1.15 | 33.46 | 0.51 | 21.47 | 0.29 |
| [1.0;0.5] | 75.53 | 3.10 | 58.83 | 1.60 | 42.67 | 0.89 | 29.41 | 0.52 |
| [1.0;0.4] | 79.39 | 3.61 | 65.72 | 2.15 | 51.68 | 1.29 | 38.63 | 0.81 |
| [1.0;0.3] | 82.61 | 4.15 | 70.17 | 2.57 | 58.94 | 1.79 | 46.78 | 1.21 |
| [1.0;0.2] | 85.92 | 4.57 | 76.47 | 3.29 | 66.36 | 2.45 | 55.90 | 1.83 |
| [1.0;0.1] | 90.90 | 5.93 | 82.20 | 4.24 | 73.14 | 3.31 | 63.15 | 2.57 |
| [1.0;0.0] | 100.00 | 11.12 | 100.00 | 11.12 | 100.00 | 11.12 | 100.00 | 11.12 |
• It is apparent that the confidence evaluation becomes more selective with an increasing range factor γ. For the highest interval [1.0;0.9] the share of bad matches decreases from 1.40% for γ=1 to 0.00% for γ=4. At the same time the coverage is reduced from 41.41% to only 4.89%. In addition, for γ=1 the lowest interval [0.1;0.0] comprises roughly 10% of all non-occluded pixels, whereas for γ=4 it comprises about 37%, resulting in a sub-optimal coverage. Thus, there is a trade-off between selectivity and coverage for the interval [1.0;0.1].
  • Table 2 provides a closer look at the results for γ=3.0 depicting the results for each image pair separately. It reveals how the coverage as well as the share of bad pixels deviate among the four image pairs. Table 2
(all values in %)

| Interval I_k | Tsukuba Cov. | Tsukuba Err. | Venus Cov. | Venus Err. | Teddy Cov. | Teddy Err. | Cones Cov. | Cones Err. |
|---|---|---|---|---|---|---|---|---|
| [1.0;0.9] | 4.87 | 0.02 | 14.30 | 0.10 | 9.36 | 0.03 | 12.98 | 0.19 |
| [1.0;0.8] | 9.39 | 0.11 | 21.93 | 0.13 | 15.80 | 0.13 | 20.90 | 0.32 |
| [1.0;0.7] | 14.80 | 0.26 | 28.73 | 0.19 | 23.77 | 0.38 | 29.39 | 0.39 |
| [1.0;0.6] | 20.84 | 0.44 | 35.88 | 0.29 | 33.39 | 0.82 | 38.71 | 0.48 |
| [1.0;0.5] | 29.10 | 0.57 | 43.99 | 0.47 | 44.01 | 1.61 | 48.22 | 0.72 |
| [1.0;0.4] | 38.53 | 0.74 | 52.03 | 0.58 | 54.22 | 2.41 | 56.78 | 1.12 |
| [1.0;0.3] | 47.68 | 1.09 | 58.56 | 0.78 | 61.56 | 3.23 | 63.57 | 1.65 |
| [1.0;0.2] | 57.64 | 1.75 | 65.70 | 1.18 | 68.71 | 4.21 | 69.98 | 2.29 |
| [1.0;0.1] | 67.64 | 2.83 | 72.86 | 1.73 | 74.67 | 5.31 | 75.22 | 3.16 |
| [1.0;0.0] | 100.00 | 13.67 | 100.00 | 7.53 | 100.00 | 14.29 | 100.00 | 10.12 |
• With an average coverage of 11.01% for the highest interval [1.0;0.9], the coverage for the individual image pairs ranges from 4.87% for Tsukuba to 14.30% for Venus. For the interval [1.0;0.1], with an average coverage of nearly 75%, the coverage for the individual images ranges from 67.64% to 75.22%. One explanation for this is that the degree of difficulty in finding accurate point correspondences varies among the four image pairs. At the same time, this outlines the power of the confidence evaluation to discriminate between good and bad matches.
• The results for γ=4 are interesting, as an exceptionally low share of bad matches is achieved for the higher intervals. In addition, the concentration of bad matches in the lowest interval [0.1;0.0] is quite high. On the other hand, over 30% of all non-occluded pixels are covered by the lowest interval, which also comprises many good matches. Here a better discrimination between good and bad matches would be beneficial.
  • To further illustrate the results for γ=3.0, annotated disparity images for one of the four stereo pairs (Teddy) are depicted in Figs. 6a) to j). In the figures, black areas designate occluded pixels or pixels belonging to the border region, white areas designate bad matches, light grey areas designate good matches, and dark grey areas designate pixels outside the confidence interval.
• The results of the evaluation clearly substantiate that the selectivity of the confidence evaluation can be increased with a single parameter, the range factor γ, which defines the minimum distances that a pixel with a reliable disparity estimate must have to pixels with unreliable or not-reliable disparity estimates in order to be assigned the maximum confidence value. The increased selectivity leads to a lower share of bad matches, especially for the higher confidence intervals, but at the cost of a reduced coverage. For γ=4 the average share of bad pixels is reduced to 0% for the highest confidence interval [1.0;0.9]. Moreover, the results show the improved concentration of bad matches at a confidence of 0. In the end, there is a trade-off between an increased selectivity (lower share of bad pixels for the higher intervals) and the achieved coverage. Thus, the selection of the best-suited range factor clearly depends on the application.

Claims (12)

  1. A method for determining a confidence value (C) of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images, wherein the confidence value (C) is a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels, the method comprising the steps of:
    - determining (2) an initial reliability value of the disparity estimate for the pixel or the group of pixels, wherein the initial reliability value is one of at least reliable and unreliable;
    - determining (3) a distance (d un) of the pixel or the group of pixels to a nearest pixel or group of pixels with an unreliable disparity estimate; and
    - determining (5) the confidence value (C) of the disparity estimate for the pixel or the group of pixels from the determined distance (d un).
  2. The method according to claim 1, wherein the initial reliability value of the pixel or the group of pixels is determined (2) from a remaining distance (dLR) between the pixel or the group of pixels in a first stereo image and a back reference of a corresponding pixel or a corresponding group of pixels in a second stereo image, wherein the corresponding pixel or the corresponding group of pixels is defined by the disparity estimate for the pixel or the group of pixels of the selected image.
  3. The method according to claim 2, wherein the disparity estimate of the pixel or the group of pixels is classified (2) as unreliable if the remaining distance (dLR) is equal to or larger than an upper threshold, and wherein the disparity estimate of the pixel or the group of pixels is classified (2) as reliable if the remaining distance (dLR) is equal to or lower than a lower threshold.
  4. The method according to claim 3, wherein the upper threshold is three pixels and the lower threshold is zero.
  5. The method according to one of claims 1 to 4, further comprising the step of determining a visibility of the pixel or the group of pixels, wherein a pixel or a group of pixels in the first stereo image is visible if it is matched by at least one pixel or at least one group of pixels in the second stereo image, and wherein a pixel or a group of pixels in the first stereo image is not visible if it is not matched by any pixel or any group of pixels in the second stereo image.
  6. The method according to claim 5, wherein the disparity estimate of the pixel or the group of pixels is also classified (2) as reliable if the remaining distance (dLR) is equal to the lower threshold plus one and the pixel or the group of pixels is not visible.
  7. The method according to one of claims 1 to 6, wherein the disparity estimate of the pixel or the group of pixels is classified (2) as undecided if the disparity estimate of the pixel or the group of pixels is neither reliable nor unreliable.
  8. The method according to claim 7, further comprising the steps of
    - determining (3) a second distance (d not) of the pixel or the group of pixels to a nearest pixel or group of pixels with a not-reliable disparity estimate, wherein the not-reliable disparity estimate is either undecided or unreliable; and
    - determining (5) the confidence value of the disparity estimate for the pixel or the group of pixels from the determined first distance (d un) and the determined second distance (d not).
  9. The method according to claim 8, further comprising the step of determining (4) a special consistency value (C scv) from the determined first distance (d un), an upper bound (d un,max) for the first distance (d un), the determined second distance (d not), and a range factor.
  10. The method according to claim 9, wherein the confidence value (C) of the disparity estimate for the pixel or the group of pixels is determined (5) by multiplying the special consistency value (C scv) with a scaled matching quality value (C mqv).
  11. The method according to one of claims 1 to 10, wherein a confidence value (C) of zero is assigned to the disparity estimate of the pixel or the group of pixels if the disparity estimate of the pixel or the group of pixels is classified as unreliable.
  12. An apparatus for determining a confidence value (C) of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images, wherein the confidence value (C) is a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels, characterized in that the apparatus is adapted to perform a method according to one of claims 1 to 11 for determining the confidence value (C).
EP11306142A 2011-09-13 2011-09-13 Apparatus and method for determining a confidence value of a disparity estimate Withdrawn EP2570990A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP11306142A EP2570990A1 (en) 2011-09-13 2011-09-13 Apparatus and method for determining a confidence value of a disparity estimate
US13/611,727 US8897545B2 (en) 2011-09-13 2012-09-12 Apparatus and method for determining a confidence value of a disparity estimate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP11306142A EP2570990A1 (en) 2011-09-13 2011-09-13 Apparatus and method for determining a confidence value of a disparity estimate

Publications (1)

Publication Number Publication Date
EP2570990A1 true EP2570990A1 (en) 2013-03-20

Family

ID=44651573

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11306142A Withdrawn EP2570990A1 (en) 2011-09-13 2011-09-13 Apparatus and method for determining a confidence value of a disparity estimate

Country Status (2)

Country Link
US (1) US8897545B2 (en)
EP (1) EP2570990A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103369341A (en) * 2013-07-09 2013-10-23 宁波大学 Post-processing method of range image
EP2922027A1 (en) * 2014-03-20 2015-09-23 Ricoh Company, Ltd. Disparity deriving apparatus, method and carrier medium
GB2537831A (en) * 2015-04-24 2016-11-02 Univ Oxford Innovation Ltd Method of generating a 3D representation of an environment and related apparatus
EP3317850A4 (en) * 2015-07-02 2018-07-04 Ricoh Company, Ltd. Disparity image generation device, disparity image generation method, disparity image generation program, object recognition device, and equipment control system

Families Citing this family (16)

Publication number Priority date Publication date Assignee Title
WO2011104151A1 (en) * 2010-02-26 2011-09-01 Thomson Licensing Confidence map, method for generating the same and method for refining a disparity map
US9201580B2 (en) 2012-11-13 2015-12-01 Adobe Systems Incorporated Sound alignment user interface
US9355649B2 (en) 2012-11-13 2016-05-31 Adobe Systems Incorporated Sound alignment using timing information
US10249321B2 (en) 2012-11-20 2019-04-02 Adobe Inc. Sound rate modification
US9451304B2 (en) 2012-11-29 2016-09-20 Adobe Systems Incorporated Sound feature priority alignment
US10455219B2 (en) 2012-11-30 2019-10-22 Adobe Inc. Stereo correspondence and depth sensors
US9208547B2 (en) 2012-12-19 2015-12-08 Adobe Systems Incorporated Stereo correspondence smoothness tool
US10249052B2 (en) 2012-12-19 2019-04-02 Adobe Systems Incorporated Stereo correspondence model fitting
US9214026B2 (en) * 2012-12-20 2015-12-15 Adobe Systems Incorporated Belief propagation and affinity measures
JP6121776B2 (en) * 2013-03-29 2017-04-26 ソニー株式会社 Image processing apparatus and image processing method
US10341634B2 (en) 2016-01-29 2019-07-02 Samsung Electronics Co., Ltd. Method and apparatus for acquiring image disparity
US10748244B2 (en) 2017-06-09 2020-08-18 Samsung Electronics Co., Ltd. Systems and methods for stereo content detection
CN109961092B (en) * 2019-03-04 2022-11-01 北京大学深圳研究生院 Binocular vision stereo matching method and system based on parallax anchor point
CN112183613B (en) * 2020-09-24 2024-03-22 杭州睿琪软件有限公司 Object recognition method and apparatus and non-transitory computer readable storage medium
DE102021125575A1 (en) * 2021-10-01 2023-04-06 Carl Zeiss Microscopy Gmbh Microscopy system and method for instance segmentation
CN114742847B (en) * 2022-04-18 2025-02-14 北京信息科技大学 A light field cutout method and device based on empty angle consistency

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2011104151A1 (en) * 2010-02-26 2011-09-01 Thomson Licensing Confidence map, method for generating the same and method for refining a disparity map

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6606406B1 (en) 2000-05-04 2003-08-12 Microsoft Corporation System and method for progressive stereo matching of digital images
US7599547B2 (en) 2005-11-30 2009-10-06 Microsoft Corporation Symmetric stereo model for handling occlusion
US8488870B2 (en) * 2010-06-25 2013-07-16 Qualcomm Incorporated Multi-resolution, multi-window disparity estimation in 3D video processing


Non-Patent Citations (3)

Title
D. SCHARSTEIN ET AL.: "A taxonomy and evaluation of dense two- frame stereo correspondence algorithms", INT. J..COMPUT. VIS., vol. 47, 2002, pages 7 - 42
H. HIRSCHMULLER ET AL.: "Evaluation of stereo matching costs on images with radiometric differences", IEEE TRANSACT. PATT. ANAL. MACH. INTELL., vol. 31, 2009, pages 1582 - 1599, XP011248100
JACHALSKY J ET AL: "Confidence evaluation for robust, fast-converging disparity map refinement", MULTIMEDIA AND EXPO (ICME), 2010 IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 19 July 2010 (2010-07-19), pages 1399 - 1404, XP031761528, ISBN: 978-1-4244-7491-2 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN103369341A (en) * 2013-07-09 2013-10-23 宁波大学 Post-processing method of range image
EP2922027A1 (en) * 2014-03-20 2015-09-23 Ricoh Company, Ltd. Disparity deriving apparatus, method and carrier medium
US9595110B2 (en) 2014-03-20 2017-03-14 Ricoh Company, Ltd. Disparity deriving apparatus, movable apparatus, robot, method of deriving disparity, method of producing disparity, program, and storage medium
GB2537831A (en) * 2015-04-24 2016-11-02 Univ Oxford Innovation Ltd Method of generating a 3D representation of an environment and related apparatus
EP3317850A4 (en) * 2015-07-02 2018-07-04 Ricoh Company, Ltd. Disparity image generation device, disparity image generation method, disparity image generation program, object recognition device, and equipment control system
US10520309B2 (en) 2015-07-02 2019-12-31 Ricoh Company, Ltd. Object recognition device, object recognition method, equipment control system, and distance image generation device

Also Published As

Publication number Publication date
US20130064443A1 (en) 2013-03-14
US8897545B2 (en) 2014-11-25

Similar Documents

Publication Publication Date Title
US8897545B2 (en) Apparatus and method for determining a confidence value of a disparity estimate
US12437432B2 (en) Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
EP2064675B1 (en) Method for determining a depth map from images, device for determining a depth map
EP2511875A1 (en) Apparatus and method for refining a value of a similarity measure
KR20150037668A (en) Method and device for edge shape enforcement for visual enhancement of depth image based rendering of a three-dimensional video stream
US20140218357A1 (en) Image processing device, image processing method, and program
Khoddami et al. Depth map super resolution using structure-preserving guided filtering
CN110827338B (en) Regional self-adaptive matching light field data depth reconstruction method
Chien et al. Virtual view synthesis using RGB-D cameras
Lu et al. Interpolation error as a quality metric for stereo: Robust, or not?
Zuo et al. Edge-Based Algorithm for Multi-view Depth Map Generation

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20130921