CN100371944C - Gray Image Segmentation Method Based on Reflected or Transmitted Light Intensity Distribution
Abstract
The present invention relates to a grayscale image segmentation method based on the distribution characteristics of reflected or transmitted light intensity, belonging to the fields of digital image analysis and computer vision. After the image noise is filtered out, the method searches each row or column of the image for background points, using the fact that the reflected or transmitted light intensity follows a convex-function or concave-function distribution together with the brightness relation between the target and the background. Key-point coordinates are laid out uniformly over the image at equal horizontal and vertical intervals. If a key point coincides with a background point that has already been found, its coordinates are kept; otherwise the coordinates of the nearest surrounding background point are taken as the new coordinates of the key point. An image background is constructed from the key points by interpolation, and the absolute value of the difference between the constructed background and the original image is taken as the target image. The target image is then segmented with a global threshold to obtain the desired segmentation result. The method is simple to operate, achieves relatively high accuracy, and is easy to popularize.
Description
Technical Field
The present invention is in the field of digital image analysis and machine vision.
Background
Image segmentation is a critical task in digital image processing and analysis, machine vision, and pattern recognition. It is the technique and process of dividing an image into regions with distinct characteristics and extracting the objects of interest. Commonly used image segmentation methods include region-based methods, boundary-based methods, and combinations of the two. Threshold-based region segmentation is widely used, particularly for target images taken against the same background. However, conventional segmentation methods consider only the characteristics of the image itself and ignore the characteristics of the illumination cast by the light source. The invention aims to segment, simply and accurately, dark targets against a bright background or bright targets against a dark background according to the distribution characteristics of the illumination intensity.
Disclosure of Invention
When images are captured, especially in industrial inspection, we often shoot dark objects against a bright background or bright objects against a dark background. If the illumination falling on the background is uniform, the intensity of the imaged background is also uniform, and the target can be segmented well with a single threshold comparable to the background intensity. However, if the illumination is not uniform while the camera is shooting, the background of the captured image is also non-uniform, and segmentation with a global threshold is not feasible. Some researchers have therefore proposed segmenting the image region by region and applying a threshold within each region, in an attempt to reduce the background variation inside each small region. Another approach computes the image gradient, takes the locations with large gradient as background points, and reconstructs the image background from them by interpolation. However, these methods do not work well for images with blurred target edges. Like other current image segmentation methods, the methods above essentially start only from the features of the image itself and do not exploit the characteristics of the illumination. In the situations where image processing and analysis and machine vision are widely applied, the environment in which the image is taken is known and the distribution of the light can be estimated. The image is output by a camera, and what the camera captures is the light intensity reflected or transmitted onto its optical sensor; the key point of the invention is therefore to segment the gray-scale image by making full use of the distribution characteristics of the reflected or transmitted light intensity.
The basic principle of the invention is as follows. First, from the fact that the reflected or transmitted light intensity produced by the light source over the photographed area follows a convex-function or concave-function distribution, the light intensity distribution characteristic of the image background is deduced. Then, combining this with the brightness relation between the target and the background, some background points are found in the image using properties of convex and concave functions. Interpolation key points are then selected from these background points, the image background is generated from the key points by interpolation, the difference between the image and the background is taken as the target image, and finally threshold segmentation is applied to the target image to obtain the desired segmentation result.
The invention is characterized by comprising the following steps:
step 1, calculating the distribution characteristic of the reflected or transmitted light intensity of a light source along a certain direction of a shot image on an x-y coordinate axis by a computer, and determining whether the light intensity distribution in an area where the image is located is in convex function distribution or concave function distribution;
step 1.1, shooting an image to be segmented by using a camera, and transmitting the image to a computer;
step 1.2, calculating a distribution function of the reflected or transmitted light intensity of a light source along a shot image in a certain direction of an x-y coordinate axis by a computer;
step 1.3, solving a second derivative of the light intensity distribution function obtained in the step 1.2;
step 1.4, determining the light intensity distribution characteristics of the image background: the area of the second derivative of the light intensity distribution function greater than 0 is a concave function, and the area less than 0 is a convex function;
step 2, according to the light intensity distribution characteristics obtained in step 1 and the brightness relation between the target and the background, searching for image background points in each row or each column of the image, row by row or column by column, along the light intensity calculation direction:
step 2.1, white noise is filtered out to remove interference; the filtering is performed by any one of the following filtering methods: smooth filtering, gaussian low-pass filtering and wiener filtering are carried out, so that a filtered digital image is obtained;
step 2.2, for an image whose light intensity follows a convex-function distribution and whose target is brighter than the background, processing as follows: taking one line of data of the filtered image along the light intensity calculation direction, denoting the position of a datum in the line by x and its gray value by y; from the left end point (x_L, y_L) of the line of data, drawing a chord to each data point (x_i, y_i); if all points between (x_L, y_L) and (x_i, y_i) lie above the chord, listing (x_i, y_i) as a left candidate background point; from the right end point (x_R, y_R) of the line of data, drawing a chord to each data point (x_i, y_i); if all points between (x_R, y_R) and (x_i, y_i) lie above the chord, listing (x_i, y_i) as a right candidate background point; selecting all points that are both left and right candidate background points as the background points found for that image line, and processing all lines in the same way;
step 2.3, for an image whose light intensity follows a convex-function distribution and whose target is darker than the background, processing as follows: taking one line of data of the filtered image along the light intensity calculation direction, denoting the position of a datum in the line by x and its gray value by y; connecting the left end point (x_L, y_L) and the right end point (x_R, y_R) of the line of data to form a chord, finding the point (x_1, y_1) above the chord with the greatest distance to the chord, and marking (x_L, y_L), (x_1, y_1) and (x_R, y_R) as background points; connecting the marked background points pairwise, in order of increasing x coordinate, to form chords, and marking the point above each chord with the greatest distance to that chord as a background point; on the basis of the background points found, repeating the process of forming chords pairwise in order of increasing x and finding background points until no point remains above any chord; processing all lines in the same way;
step 2.4, for the image with the light intensity distributed in a concave function and the target darker than the background, the image is reversed, and the subsequent processing mode is the same as that in the step 2.2;
step 2.5, for the image with the light intensity distributed in a concave function and the target brighter than the background, the image is reversed, and the subsequent processing mode is the same as that in the step 2.3;
step 3, uniformly taking points from the image as key points of an image background generated by interpolation according to a mode of equidistant vertical and horizontal coordinate intervals; since the key point must be a background point, if the key point is not the position of the background point found in step 2, replacing it with the background point closest to the key point;
step 4, constructing an image background from the key points obtained in step 3 using any one of the following interpolation methods: Lagrange interpolation, Newton divided-difference interpolation, piecewise low-order interpolation, or spline interpolation;
step 5, taking the difference between the original image and the constructed image background as a target image; the specific subtraction method is that if f (x, y) is the gray scale of the pixel (x, y) in the original image, and g (x, y) is the gray scale of the pixel (x, y) in the reconstructed background image, the gray scale value of the pixel of the target image at (x, y) is | g (x, y) -f (x, y) |;
step 6, after low-pass filtering is carried out on the target image, a target is segmented by utilizing a global threshold method;
the method not only can accurately describe the given image background information and segment the target image, but also can normalize the illumination intensity of each point, so that the segmentation result achieves higher accuracy, and the method is particularly favorable for quantitative analysis.
The invention will be further explained with reference to the drawings.
Description of the drawings:
FIG. 1 is a block diagram of the overall process of gray scale image segmentation according to the present invention;
FIG. 2 is a diagram of the hardware device of the present invention;
FIG. 2A is a test image showing the present invention in which the background brightness is a convex function distribution in the horizontal direction, and the target is brighter than the background;
FIG. 2B is a row of image signals of the test image of FIG. 2A according to the present invention;
FIG. 2C is a low-pass filtered row of image signals of FIG. 2A, showing a comparison of the same row of images before and after filtering with FIG. 2B;
FIG. 2D is a diagram illustrating the present invention finding a background point from the left end point for a line of images;
FIG. 2E is a diagram illustrating the present invention finding a background point from the right end point for a line of images;
FIG. 2F is a diagram of the common background points found in FIGS. 2D and 2E of the present invention;
FIG. 2G is a diagram of background points found in the test image of FIG. 2A according to the present invention;
FIG. 2H is a schematic diagram of the present invention uniformly setting interpolation key points in the test image of FIG. 2A in a vertically and horizontally equidistant manner;
FIG. 2I is a schematic diagram of the adjusted interpolation key points of FIG. 2H according to the present invention;
FIG. 2J is a three-dimensional display of the background of the test image of FIG. 2A constructed using the found interpolation key points according to the present invention;
FIG. 2K is a three-dimensional display of the test image of FIG. 2A used in the present invention;
FIG. 2L is a three-dimensional display of the target image obtained from the test image of FIG. 2A according to the present invention;
FIG. 2M shows the final segmentation result of the test image of FIG. 2A according to the present invention;
FIG. 3A is a test image showing the present invention in which the luminance of the background is a convex function in the horizontal direction, and the target is darker than the background;
FIG. 3B is a low pass filtered row of image signals of FIG. 3A according to the present invention;
FIG. 3C is a diagram illustrating the present invention searching for a first background point in a row of images, excluding a peer;
FIG. 3D is a schematic diagram of the present invention continuing to find background points using the method of the present invention based on FIG. 3C;
FIG. 3E is a diagram of the background points eventually found in a row of images such as that of FIG. 3B according to the present invention;
FIG. 3F is a diagram of background points found in the test image of FIG. 3A according to the present invention;
FIG. 3G is a schematic diagram of the present invention setting uniform interpolation key points in the test image of FIG. 3A in a vertically and horizontally equidistant manner;
FIG. 3H is a schematic diagram of the adjusted interpolation key points of FIG. 3G according to the present invention;
FIG. 3I is a three-dimensional display of the background of the test image of FIG. 3A constructed from the interpolation key points found by the present invention;
FIG. 3J is a three-dimensional display of the test image of FIG. 3A used in the present invention;
FIG. 3K is a three-dimensional display of the target image obtained from the test image of FIG. 3A according to the present invention;
FIG. 3L shows the final segmentation result of the test image of FIG. 3A according to the present invention.
The specific implementation mode is as follows:
FIG. 1 shows the overall flow chart of the gray-scale image segmentation of the present invention, which mainly comprises the following steps: first, calculate the distribution characteristic of the reflected or transmitted light intensity along a certain direction of the shot image on the x-y coordinate axes, to determine whether the background light intensity in the area where the image is located follows a convex-function or a concave-function distribution; second, after low-pass filtering the image, search for image background points in each row or each column of the image, row by row or column by column along the light intensity calculation direction, according to the distribution characteristics of the reflected or transmitted light intensity and the brightness relation between the target and the background; third, uniformly set key-point coordinates on the image at equal horizontal and vertical intervals: if a key point falls on a found background point its coordinates are kept, otherwise the coordinates of the nearest surrounding background point are used as the new coordinates of the key point, and the gray value of each key point is determined from the filtered image according to its coordinates; fourth, construct the image background from the found key points by interpolation; fifth, subtract the constructed background image from the original image and take the absolute value of the difference as the target image; sixth, perform global threshold segmentation on the obtained target image to obtain the desired segmentation result. The specific process is as follows:
the first step of the invention is to input the image collected by the camera into the computer, the computer calculates the distribution characteristic of the light intensity of the illumination reflection or transmission along a certain direction of the shot image, and determines whether the area of the image is in convex function distribution or concave function distribution, and the hardware equipment diagram is shown in figure 2. Because the change of the light irradiation intensity is continuously guided, the second derivative of the reflected or transmitted light intensity in the image area is obtained, whether the light intensity distribution of the area where the image is located is in convex function distribution or concave function distribution is judged by taking the second derivative as 0 as a boundary, the light intensity is in concave function distribution in the area where the second derivative is larger than 0, and the light intensity is in convex function distribution in the area where the second derivative is smaller than 0. Taking a point light source as an example, the light intensity distribution function can be obtained by the following formula: setting the distance between the point light source and the background as h, and taking the vertical projection of the point light source on the background as an original point, the light intensity corresponding to the point with the coordinates (x, y) is:the second derivative of the intensity in the x-axis direction is given by
If the reflected or transmitted light intensity follows a convex-function distribution, jump directly to the second step; if it follows a concave-function distribution, invert the gray scale of the image with I_inv = 255 − I, where I denotes the gray value before inversion and I_inv the gray value after inversion, so that the concave-function distribution becomes a convex-function distribution. In order to remove the interference of white noise and find the true background points, low-pass filtering such as Gaussian low-pass filtering, smoothing filtering or Wiener filtering is also performed in the first step. FIGS. 2B and 2C compare the same line of the test image of FIG. 2A before and after Wiener filtering; it can be seen that the white noise is essentially filtered out.
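A minimal sketch of the gray-scale inversion and of the noise filtering named above (here Wiener filtering from SciPy; the window size and function names are assumptions):

```python
import numpy as np
from scipy.signal import wiener

def invert_if_concave(image, intensity_is_convex):
    """Pre-processing: reduce a concave-function background to the convex case."""
    img = image.astype(float)
    return img if intensity_is_convex else 255.0 - img  # I_inv = 255 - I

def denoise(image, window=5):
    """Step 2.1: suppress white noise before searching for background points."""
    return wiener(image.astype(float), mysize=window)
```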
After the distribution characteristics of the reflected or transmitted light intensity are determined, the second step of the invention is entered: image background points are searched for in each row or each column of the image, row by row or column by column, according to the light intensity distribution characteristics and the brightness relation between the target and the background. Whether the distribution is a convex function or a concave function, each case may occur with either a bright target or a dark target, giving four combinations: (1) the background light intensity follows a convex-function distribution and the target is darker than the background; (2) the background light intensity follows a convex-function distribution and the target is brighter than the background; (3) the background light intensity follows a concave-function distribution and the target is darker than the background; (4) the background light intensity follows a concave-function distribution and the target is brighter than the background. However, since inverting the gray scale turns a convex function into a concave function and vice versa, the convex-function/bright-target and concave-function/dark-target cases can be grouped into a first class, and the convex-function/dark-target and concave-function/bright-target cases into a second class, so only two classes of images need to be handled. Here the case in which the background light intensity follows a convex or concave function in the horizontal direction is taken as an example; other directions are handled similarly. FIG. 2A shows a test image in which the background light intensity follows a convex-function distribution in the horizontal direction and the target is brighter than the background, and FIG. 3A shows another test image in which the background light intensity also follows a convex-function distribution in the horizontal direction but the target is darker than the background. In FIGS. 2A and 3A, a is the object to be segmented and b is the background. Different background-point search methods are adopted for the two classes of images.

For the first class of image, i.e. a convex-function background light intensity with a target brighter than the background, the method of the invention searches for background points line by line. The specific process is as follows: take one line of data of the image along the light intensity calculation direction (assumed here to be the horizontal direction), denote the position of a datum in the line by x and its gray value by y; for a color image, the gray conversion is I = R × 0.3 + G × 0.59 + B × 0.11, and the result of I is quantized to 256 levels. From the left end point (x_L, y_L) of the line of data, draw a chord to each data point (x_i, y_i); if all points between (x_L, y_L) and (x_i, y_i) lie above the chord, list (x_i, y_i) as a left candidate background point; the line segment c in FIG. 2D is a chord drawn from the left end point, and the small boxes d are the left candidate background points found. From the right end point (x_R, y_R) of the line of data, draw a chord to each data point (x_i, y_i); if all points between (x_R, y_R) and (x_i, y_i) lie above the chord, list (x_i, y_i) as a right candidate background point; the line segment e in FIG. 2E is a chord drawn from the right end point, and the small triangles f are the right candidate background points found. The points that are both left candidate and right candidate background points are selected as the background points found for that image line; the small circles g in FIG. 2F are the found background points, i.e. the intersection of the points d in FIG. 2D and f in FIG. 2E. For the entire image, the white pixels in FIG. 2G are the background points found from FIG. 2A by the above method.

For the second class of image, i.e. a convex-function background light intensity with a target darker than the background, the method of the invention likewise searches for background points line by line. The specific process is as follows: take one line of data of the image along the light intensity calculation direction, denote the position of a datum in the line (here the horizontal direction) by x and its gray value by y; for a color image, the gray conversion is I = R × 0.3 + G × 0.59 + B × 0.11, and the result of I is quantized to 256 levels. Connect the left end point (x_L, y_L) and the right end point (x_R, y_R) of the line of data to form a chord; the line segment k in FIG. 3C is the chord formed by the left end point (x_L, y_L) and the right end point (x_R, y_R). Find the point (x_1, y_1) above the chord with the greatest distance to the chord, and mark (x_L, y_L), (x_1, y_1) and (x_R, y_R) as background points; g in FIG. 3C is the background point found, whose distance to k is the greatest. The marked background points are then connected pairwise, in order of increasing x coordinate, to form chords, and the point above each chord with the greatest distance to that chord is marked as a background point; FIG. 3D shows the search continuing from the background points (x_L, y_L), (x_1, y_1) and (x_R, y_R). On the basis of the background points found, the process of forming chords pairwise in order of increasing x and finding background points is repeated until no point remains above any chord; the black points g in FIG. 3E are the background points found from FIG. 3B. After every line of the image is processed in this way, the background points of the whole image are obtained; the white pixels in FIG. 3F are the background points found from FIG. 3A by the above method.
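The two row-wise search procedures can be sketched as follows (Python/NumPy; the function names are illustrative, and the vertical gap above a chord is used in place of the perpendicular distance, since for a fixed chord both are maximized at the same point):

```python
import numpy as np

def _points_above_chord(y, i, j):
    """True if every sample strictly between indices i and j lies on or above
    the chord joining (i, y[i]) and (j, y[j])."""
    if abs(j - i) < 2:
        return True
    xs = np.arange(min(i, j) + 1, max(i, j))
    chord = y[i] + (y[j] - y[i]) * (xs - i) / (j - i)
    return np.all(y[xs] >= chord)

def background_points_bright_target(y):
    """First class: convex-function background, target brighter than background.
    A point is kept if it is both a left and a right candidate background point."""
    n = len(y)
    left = {i for i in range(n) if _points_above_chord(y, 0, i)}
    right = {i for i in range(n) if _points_above_chord(y, n - 1, i)}
    return sorted(left & right)

def background_points_dark_target(y):
    """Second class: convex-function background, target darker than background.
    Recursively split each chord at the point above it with the greatest distance."""
    n = len(y)
    marked = [0, n - 1]

    def split(i, j):
        if j - i < 2:
            return
        xs = np.arange(i + 1, j)
        chord = y[i] + (y[j] - y[i]) * (xs - i) / (j - i)
        gaps = y[xs] - chord          # gap above the chord (same maximizer as distance)
        if gaps.max() <= 0:           # no point remains above this chord
            return
        k = int(xs[np.argmax(gaps)])
        marked.append(k)
        split(i, k)
        split(k, j)

    split(0, n - 1)
    return sorted(marked)
```

In effect, the second procedure traces the upper convex hull of the row data, which is consistent with the background being a cap-shaped (convex-function) curve.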
For the case that the background light intensity is distributed in a convex function or a concave function in the vertical direction or other directions, the above method can be used.
The third step is to select key points among the background points. The method adopted by the invention is to first take points uniformly from the image, at equal horizontal and vertical coordinate intervals, as the key-point coordinates for the interpolated background surface; the dots h in FIGS. 2H and 3G are the key-point coordinates so set. Since key points must be background points, if a key-point coordinate is not the position of a background point found in the second step, it is replaced by the coordinates of the nearest background point; the dots h in FIGS. 2I and 3H are the key-point positions after adjustment, and it can be seen that the grid formed by the key points becomes sparse in the target region a. With the key-point coordinates determined, the gray value at the corresponding coordinates is taken from the filtered image as the gray value of the key point, that is, the third variable of the interpolation.
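A sketch of the key-point placement and adjustment (Python/NumPy; the grid step and the function name are assumptions):

```python
import numpy as np

def place_key_points(bg_mask, filtered, step=32):
    """Step 3 sketch: lay key points on a uniform grid and snap each one to the
    nearest background point found in step 2; the gray value (the third
    interpolation variable) is read from the filtered image."""
    bg_rows, bg_cols = np.nonzero(bg_mask)
    bg_coords = np.stack([bg_rows, bg_cols], axis=1)
    keys = []
    for r in range(0, bg_mask.shape[0], step):
        for c in range(0, bg_mask.shape[1], step):
            if bg_mask[r, c]:
                rr, cc = r, c
            else:
                # replace with the background point closest to the key point
                d2 = np.sum((bg_coords - [r, c]) ** 2, axis=1)
                rr, cc = bg_coords[np.argmin(d2)]
            keys.append((rr, cc, filtered[rr, cc]))
    return keys
```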
The fourth step of the invention is to construct the image background from the key points obtained above using an interpolation method such as Lagrange interpolation, Newton divided-difference interpolation, finite-difference interpolation on equidistant nodes, Hermite interpolation, piecewise low-order interpolation, or spline interpolation. Here, B-spline surface interpolation is taken as an example. A B-spline surface is formed by joining a number of spline surface patches; for a bicubic B-spline surface patch, the general formula is:
Q(s, t) = S·M·G·Mᵀ·Tᵀ,  0 ≤ s ≤ 1, 0 ≤ t ≤ 1,

where S = [s³ s² s 1] and T = [t³ t² t 1] are the parameter matrices, M is the 4 × 4 cubic B-spline basis matrix, and G = (g_ij) is the key-point matrix whose entries g_ij are the key points found above. Q(s, t) is the interpolated background.
FIGS. 2J and 3I are three-dimensional volumetric displays of background images produced by the above bicubic B-spline surface interpolation based on the control points determined in FIGS. 2I and 3H, respectively.
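The patch evaluation can be sketched as below (Python/NumPy); the basis matrix M is the standard uniform cubic B-spline basis matrix, assumed here because the matrix shown in the original formula is not reproduced in this text:

```python
import numpy as np

# Standard uniform cubic B-spline basis matrix (an assumption; the original
# figure giving M is not reproduced here).
M = (1.0 / 6.0) * np.array([[-1,  3, -3, 1],
                            [ 3, -6,  3, 0],
                            [-3,  0,  3, 0],
                            [ 1,  4,  1, 0]])

def bspline_patch(G, s, t):
    """Evaluate Q(s, t) = S · M · G · M^T · T^T for one bicubic patch,
    where G is the 4x4 matrix of key-point gray values and 0 <= s, t <= 1."""
    S = np.array([s**3, s**2, s, 1.0])
    T = np.array([t**3, t**2, t, 1.0])
    return S @ M @ G @ M.T @ T

# Example: evaluate the centre of a patch built from 16 neighbouring key points.
G = np.arange(16, dtype=float).reshape(4, 4)   # placeholder gray values
print(bspline_patch(G, 0.5, 0.5))
```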
After the image background is reconstructed, it is the fifth step of the present invention to use the absolute value of the difference between the original image and the constructed image background as the target image. Specifically, if f (x, y) is the gray level of the pixel (x, y) in the original image, and g (x, y) is the gray level of the pixel (x, y) in the background of the reconstructed image, the gray level of the pixel in the (x, y) of the target image is | g (x, y) -f (x, y) |. Fig. 2K is a three-dimensional display of fig. 2A, and fig. 2L is a three-dimensional display of the absolute value of the gray scale difference of the pixel corresponding to fig. 2J and fig. 2K, that is, a three-dimensional display of the target image. Similarly, FIG. 3J is the three-dimensional display of FIG. 3A, and FIG. 3K is the three-dimensional display of the absolute value of the gray scale difference of the corresponding pixel of FIGS. 3I and 3J, i.e., the three-dimensional display of the target image.
The final step of the invention is to segment the computed target image: after the target image is low-pass filtered, the target is segmented with a global threshold method. The global threshold is theoretically 0, but in practice it is generally set to 2 or 3 because of residual noise. FIG. 2M is the final segmentation result of FIG. 2A according to the present invention, and FIG. 3L is the final segmentation result of FIG. 3A according to the present invention.
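A sketch of these last two steps (Python; the filter parameter and function name are assumptions, while the threshold follows the value of 2 or 3 suggested above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_target(original, background, threshold=3, sigma=1):
    """Steps 5-6 sketch: the target image is |g(x, y) - f(x, y)|; after low-pass
    filtering, a small global threshold separates the target from the background."""
    target = np.abs(background.astype(float) - original.astype(float))
    target = gaussian_filter(target, sigma=sigma)
    return target > threshold
```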
Claims (1)
1. The gray level image segmentation method based on the reflected or transmitted light intensity distribution characteristics is characterized by comprising the following steps of:
step 1, calculating the distribution characteristics of the reflected or transmitted light intensity of a light source along the horizontal direction or the vertical direction of an x-y coordinate axis of a shot image by a computer, and determining whether the light intensity distribution in an area where the image is located is convex function distribution or concave function distribution;
step 1.1, shooting an image to be segmented by using a camera, and transmitting the image to a computer;
step 1.2, calculating a distribution function of the reflected or transmitted light intensity of the light source along the horizontal direction or the vertical direction of the x-y coordinate axis of the shot image by a computer;
step 1.3, solving a second derivative of the light intensity distribution function obtained in the step 1.2;
step 1.4, determining the light intensity distribution characteristics of the image background: the area of the second derivative of the light intensity distribution function greater than 0 is a concave function, and the area less than 0 is a convex function;
step 2, according to the light intensity distribution characteristics obtained in step 1 and the brightness relation between the target and the background, searching for image background points in each row or each column of the image, row by row or column by column, along the light intensity calculation direction:
step 2.1, white noise is filtered out to remove interference; the filtering is carried out by adopting any one of the following filtering methods: smooth filtering, gaussian low-pass filtering and wiener filtering are carried out, so that a filtered digital image is obtained;
step 2.2, for an image whose light intensity follows a convex-function distribution and whose target is brighter than the background, processing as follows: taking one line of data of the filtered image along the light intensity calculation direction, denoting the position of a datum in the line by x and its gray value by y; from the left end point (x_L, y_L) of the line of data, drawing a chord to each data point (x_i, y_i); if all points between (x_L, y_L) and (x_i, y_i) lie above the chord, listing (x_i, y_i) as a left candidate background point; from the right end point (x_R, y_R) of the line of data, drawing a chord to each data point (x_i, y_i); if all points between (x_R, y_R) and (x_i, y_i) lie above the chord, listing (x_i, y_i) as a right candidate background point; selecting all points that are both left and right candidate background points as the background points found for that image line, and processing all lines in the same way;
step 2.3, for an image whose light intensity follows a convex-function distribution and whose target is darker than the background, processing as follows: taking one line of data of the filtered image along the light intensity calculation direction, denoting the position of a datum in the line by x and its gray value by y; connecting the left end point (x_L, y_L) and the right end point (x_R, y_R) of the line of data to form a chord, finding the point (x_1, y_1) above the chord with the greatest distance to the chord, and marking (x_L, y_L), (x_1, y_1) and (x_R, y_R) as background points; connecting the marked background points pairwise, in order of increasing x coordinate, to form chords, and marking the point above each chord with the greatest distance to that chord as a background point; on the basis of the background points found, repeating the process of forming chords pairwise in order of increasing x and finding background points until no point remains above any chord; processing all lines in the same way;
step 2.4, for the image with the light intensity distributed in a concave function and the target darker than the background, the image is reversed, and the subsequent processing mode is the same as that in the step 2.2;
step 2.5, for the image with the light intensity distributed in a concave function and the target brighter than the background, the image is reversed, and the subsequent processing mode is the same as that in the step 2.3;
step 3, uniformly taking points from the image as key points of an image background generated by interpolation according to a mode of equidistant vertical and horizontal coordinate intervals; since the key point must be a background point, if the key point is not the position of the background point found in step 2, replacing it with the background point closest to the key point;
step 4, constructing an image background from the key points obtained in step 3 using any one of the following interpolation methods: Lagrange interpolation, Newton divided-difference interpolation, piecewise low-order interpolation, or spline interpolation;
step 5, taking the difference between the original image and the constructed image background as a target image; the specific subtraction method is that if f (x, y) is the gray scale of the pixel (x, y) in the original image, and g (x, y) is the gray scale of the pixel (x, y) in the reconstructed background image, the gray scale value of the pixel of the target image at (x, y) is | g (x, y) -f (x, y) |;
step 6, after low-pass filtering the target image, segmenting the target by using a global threshold method.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CNB2006100114169A | 2006-03-03 | 2006-03-03 | Gray Image Segmentation Method Based on Reflected or Transmitted Light Intensity Distribution |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN1811794A | 2006-08-02 |
| CN100371944C | 2008-02-27 |
Legal Events

- C06 / PB01: Publication
- C10 / SE01: Entry into force of request for substantive examination
- C14 / GR01: Patent grant
- C17 / CF01: Cessation of patent right (granted publication date: 2008-02-27; termination date: 2013-03-03, due to non-payment of annual fee)