US20010012395A1 - Automated inspection of objects undergoing general affine transformation - Google Patents


Info

Publication number
US20010012395A1
Authority
US
United States
Prior art keywords
image
run
training
time
affine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/141,932
Other versions
US6421458B2 (en)
Inventor
David J. Michael
Igor Reyzin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cognex Corp
Original Assignee
Cognex Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cognex Corp
Priority to US09/141,932
Assigned to COGNEX CORPORATION. Assignment of assignors interest (see document for details). Assignors: REYZIN, IGOR; MICHAEL, DAVID J.
Publication of US20010012395A1
Application granted
Publication of US6421458B2
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection



Abstract

During statistical training and automated inspection of objects by a machine vision system, a General Affine Transform is advantageously employed to improve system performance. During statistical training, the affine poses of a plurality of training images are determined with respect to an alignment model image. Following filtering to remove high frequency content, the training images and their corresponding affine poses are applied to an affine transformation. The resulting transformed images are accumulated to compute template and threshold images to be used for run-time inspection. During run-time inspection, the affine pose of the run-time image relative to the alignment model image is determined. Following filtering of the run-time image, the run-time image is affine transformed by its affine pose. The resulting transformed image is compared with the template and threshold images computed during statistical training to determine object status. In this manner, automated training and inspection are relatively less demanding on system storage, and result in an improvement in system speed and accuracy.

Description

    BACKGROUND OF THE INVENTION
  • Machine or “artificial” vision systems are commonly employed for the automated inspection of objects. In manufacturing applications for example, machine vision systems distinguish those objects manufactured within acceptable tolerance levels (i.e. “good parts”), from objects manufactured outside acceptable tolerance levels (“bad parts”). [0001]
  • Contemporary automated inspection techniques generally include the steps of statistical training and run-time inspection. During statistical training, a number of acceptable objects, which can be at a range of positions and orientations relative to the vision system, are presented. The system interrogates the objects and formulates statistical images of the acceptable objects. In current systems, the statistical images comprise a template, or average, image, and an acceptable statistical variation of the average image, referred to as a threshold image, which is often computed from a variance or standard deviation image. [0002]
  • The information learned about the object during statistical training is, in turn, applied to the run-time inspection of parts of unknown quality. The run-time images obtained during run-time inspection are compared to the template image and the differences are analyzed. Where the analyzed differences exceed a known, predetermined value, the part is considered defective. Otherwise, the part is acceptable. [0003]
  • Both statistical training and run-time inspection processes include the steps of registration and computation. During registration, an alignment of the object image or “target” relative to an alignment model origin is performed. The output of the alignment process is the spatial coordinates of a predetermined origin of the target relative to the alignment model origin. In contemporary systems, the spatial coordinates comprise a real number including a whole pixel portion and a sub-pixel portion. Translation of the whole pixel portion is relatively straightforward. A well-known technique referred to as “re-windowing” is used to perform this translation. [0004]
  • Computation of the sub-pixel portion, on the other hand, is quite complicated. Conventional training processes employ a technique called sub-pixel binning in which each pixel in the image is quantized into a number of sub-pixels. The goal of this process is to build a template image and a threshold image for each bin, thereby improving the resolution of statistical training. During run-time, the run-time image is compared to the template image. The origin of the run-time image is analyzed during an alignment procedure, and the appropriate sub-pixel bin is determined. The run-time image and the binned template image (average) are then compared on a pixel-by-pixel basis, depending on the selected bin. [0005]
  • Computations during training and run-time involving the sub-pixel binned images require a significant amount of storage space. During training, each sub-pixel bin requires at least two accumulators: one for the template image and one for the threshold image. For example, quantizing each pixel into a 4×4 grid of sub-pixel bins requires 32 accumulator images, where a single bin would require only two. As more sub-pixel bins are added to improve system resolution and therefore lower inspection errors due to sub-pixel misregistration, the system is further burdened by the need for additional storage and image accumulators. Furthermore, the quality of the statistics of each sub-pixel bin is a direct function of the amount and quality of training data stored in the bin. If a bin does not contain much training data, then statistics in that bin are relatively poor and therefore inspection errors are more likely to occur. [0006]
  • SUMMARY OF THE INVENTION
  • The present invention is directed to a method and system for statistical training of a machine vision system on an object, and is further directed to a method and system for automated inspection of objects using the results of such statistical training. The invention addresses the aforementioned limitations of conventional techniques, and provides an inspection process which is relatively less demanding on system storage, and improves system speed and accuracy. [0007]
  • During statistical training and automated inspection of objects by the machine vision system of the present invention, a General Affine Transform is advantageously employed to improve system performance. During statistical training, the affine poses of a plurality of training images are determined with respect to an alignment model image. Following filtering to remove selected spatial frequency content, the training images and their corresponding affine poses are applied to an affine transformation. The resulting transformed images are accumulated to compute template and threshold images to be used for run-time inspection. [0008]
  • During run-time inspection, the affine pose of the run-time image relative to the alignment model image is determined. Following filtering of the run-time image, the run-time image is affine transformed by its affine pose. The resulting transformed image is compared to the template and threshold images computed during statistical training to determine object status. In this manner, automated training and inspection are relatively less demanding on system storage, and result in an improvement in system speed and accuracy. [0009]
  • In one embodiment, the present invention is directed to a method for statistical training of an artificial vision system on an object. A plurality of training images are generated by iteratively imaging one or a number of training objects. The affine pose of each training image with respect to an alignment model image is next determined. Each training image is prefiltered to generate filtered images. Each filtered image is transformed with its corresponding affine pose to generate a plurality of transformed images. A template image and threshold image of the object are then computed from the plurality of transformed images. [0010]
  • In another embodiment, the present invention is directed to a method for automated inspection of an object. The object is first imaged to generate a run-time image. The affine pose of the run-time image with respect to an alignment model image is then determined. The run-time image is prefiltered to generate a filtered image. The filtered image is transformed with its affine pose to generate a transformed image. The transformed image is mean-corrected by the template image, and the mean-corrected image is compared with a threshold image to produce an error image. The error image is analyzed to determine object status. [0011]
  • The alignment model image may be selected as one of or a part of one of the training images collected during statistical training. A geometric model of the object may also be employed as an alignment model image. [0012]
  • In a preferred embodiment, the template image comprises an average image of the transformed training images, while the threshold image comprises an allowable variation of the average image, for example, a linear function of a standard deviation image. [0013]
  • The affine pose is preferably computed by determining the General Affine Transform parameters which accurately map the training and run-time images to the alignment model image. [0014]
  • During prefiltering, the training and run-time images are convolved with a kernel suitable for eliminating high spatial frequency elements from the image that match the worst-case spatial frequency effects of the affine interpolator. In one embodiment, the kernel comprises an impulse function. [0015]
  • The process of transforming the filtered training and run-time images preferably comprises applying the image and the parameters of the corresponding affine pose to a General Affine Transform, such that the transformed training images are properly aligned for computing the template and threshold images, and such that the transformed run-time image is properly aligned with the template and threshold images for comparison thereof. The comparison of the transformed run-time image with the template image is preferably performed by a process referred to as double subtraction. [0016]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other objects, features and advantages of the invention will be apparent from the more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. [0017]
  • FIG. 1A is a block diagram of the primary components of a statistical training system in accordance with the present invention. [0018]
  • FIG. 1B is a block diagram of the primary components of a real-time inspection system in accordance with the present invention. [0019]
  • FIG. 2A is a flow diagram representing the steps for statistical training in accordance with the present invention. [0020]
  • FIG. 2B is a flow diagram representing the steps for automated object inspection in accordance with the present invention. [0021]
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • The present invention applies to both statistical training and run-time inspection in artificial vision systems, taking advantage of the availability of accurate alignment tools capable of quickly generating the affine pose of an object image relative to an alignment model of the object. The affine pose is, in turn, used to generate a transformed image. During training, the transformed image is used to compute a template image and threshold image of the object. [0022]
  • During run-time inspection, the transformed image is compared to the computed template and threshold images to determine object status, i.e., whether the object is within tolerances, or alternatively, whether the object is defective. [0023]
  • In this manner, the present invention employs the General Affine Transformation to exactly transform coordinate systems such that during training, the transformed training images align exactly to allow for the computation of a single template image and single threshold image to define the object, and such that during inspection, the template and threshold images and transformed run-time image align exactly for comparison by double subtraction. During training, the template and threshold images can be accumulated and computed using a single pair of accumulator images, as compared to the binning technique of conventional procedures requiring multiple pairs of accumulator images. Furthermore, in the present invention, all training data is represented in the singular template and threshold images, as compared to binning, whereby training data may be unevenly scattered throughout the binned images. By virtue of precise alignment as a result of the affine transform, system accuracy and performance are greatly improved over conventional systems. [0024]
  • During statistical training and run-time inspection, the position and orientation of the object being interrogated may vary along many degrees of freedom. Combinations of these degrees of freedom include the well-known parameters scale, rotation, skew, and translation. Each of these degrees of freedom is represented in the parameters of the well-known General Affine Transformation, which allows for precise mapping between source and destination images. The General Affine Transformation is well known and understood in the art, and is described in Two Dimensional Imaging, Ronald N. Bracewell, Prentice Hall, N.J., 1995, pages 50-67, incorporated herein by reference. [0025]
  • The method and apparatus of the present invention will now be described in further detail with reference to the attached figures. The description of the statistical training system of FIG. 1A refers to the statistical training process steps of FIG. 2A. Likewise, the description of the run-time inspection system of FIG. 1B refers to the run-time inspection process steps of FIG. 2B. [0026]
  • FIG. 1A is a block diagram of the primary components of a preferred embodiment of a statistical training system in accordance with the present invention. The statistical training system includes an imaging system 100 and processing system 80. [0027]
  • In step 200 (FIG. 2A), a series of training images is captured of an object or a plurality of objects. For example, a plurality of objects 104 may be presented to the imaging system 100 by means of conveyor 102. Alternatively, the same object may be presented at a range of positions and orientations relative to the imaging system 100. In a preferred embodiment, the training objects 104 comprise objects known to be representative samples so as to produce the most accurate statistics. Ideally, for each training image, the training object 104 lies in nearly the same position and orientation relative to the imaging system 100, allowing for increased resolution. In general, the greater the number of training images, the more robust and accurate are the results. [0028]
  • In step 204 (FIG. 2A), an alignment model image for the object is determined. In one embodiment, the alignment model image 114 is selected from one of or part of the captured training images, for example the first collected training image. Alternatively, the alignment model image 114 may comprise a synthetic geometric model of the object 104. The alignment model preferably includes readily distinguishable features of the object to be employed as a reference for alignment, for example corners, faces, or collections of corners and faces of the object. The alignment model image may comprise the entire training image itself, or alternatively may comprise a portion of the training image containing interesting, or otherwise distinguishable, features of the object. [0029]
  • The selected alignment model image 114 and each training image 101 are presented to an alignment system 106 for determining the affine pose 107 of each training image with respect to the alignment model image (step 206 of FIG. 2A). Alignment tools, for example PATMAX™, commercially available from Cognex Corporation, Natick, Mass., are readily available to perform the affine pose computation. [0030]
  • The affine pose 107 comprises a set of parameters which describe how the training image can be transformed mathematically so as to align the training image with the alignment model image. Assuming a two-dimensional image of a three-dimensional object, the affine parameters apply to six degrees of freedom, to compensate for image scale, shear, rotation, skew, and translation. The parameters are in the form of a 2×2 matrix containing scale, rotation, skew and shear parameters, and a two-dimensional vector containing displacement, or translation, parameters. Note, however, that the present invention is not limited to a system where the object is undergoing all six degrees of freedom. The invention applies equally well to inspecting objects undergoing a subset of the degrees of freedom, for example translation only, or translation and rotation. In that case, the alignment tool provides only those parameters necessary for determining the affine pose of the object. For example, the CNLSearch™ tool commercially available from Cognex Corporation provides translation only. [0031]
  • Each training image 101 is further applied to a prefilter 108 to eliminate errors that would otherwise be introduced by the affine transformation (step 208 of FIG. 2A). The affine transform can behave as a low pass filter, the filtering effect of which is dependent, for example, on the type of interpolation used and on the object rotation angle. The variance in filtering effect manifests itself especially in high-frequency elements of the image. The purpose of the prefilter is to substantially eliminate such high-frequency effects from the training images before the affine transform is performed, so as to reduce the dependence of the affine transform results on those pose-specific filtering effects. The prefilter may comprise a Gaussian or averaging filter, for example, in the form of a convolution kernel to be applied to the training image on a pixel-by-pixel basis, and is preferably matched to the worst-case effects of the interpolator used in the affine transform. The resultant filtered training images 109 may be slightly blurred as a result of prefiltering, but not so much as to adversely affect system performance. If the worst-case effects of the affine interpolator are negligible, the convolution kernel may comprise, for example, an impulse function. [0032]
  • In step 210 (FIG. 2A), each filtered training image 109 and its corresponding affine pose parameters 107 are applied to the General Affine Transform 110 to generate transformed training images 111. The affine transform 110 assures that each of the transformed training images 111 substantially align to allow for later computation of the template and threshold images defining the object. The affine transform is well-known in the art, and systems and software for computing the affine transform are available commercially. A preferred affine transform computation technique employs the teachings of U.S. patent application Ser. No. 09/027,432, by Igor Reyzin, filed Feb. 20, 1998, assigned to Cognex Corporation, the contents of which are incorporated herein by reference in their entirety. The transformed training images are preferably stored in a pair of accumulators 112. [0033]
  • Following transformation, a template image 113 is computed in step 212 (FIG. 2A). The template image preferably comprises an average image of the transformed training images 111 computed from the first accumulated image 112. A threshold image 115 is also computed as a linear function of the standard deviation of the average image which, in turn, is computed from the first and second accumulated images 112. Alternatively, the threshold image may be computed by a linear function of the variance of the average image, or by a linear function of the magnitude of an operator, for example a Sobel operator, applied to the training image. If a Sobel operator is used, then the second accumulator is no longer necessary. The combined template and threshold images 113, 115 together define the object and acceptable variations thereof. They are later used during run-time inspection for comparison with a run-time image of the object to determine object status, i.e. determine whether the object is acceptable, or is a reject. Software for computing the template and threshold images is available commercially, for example the GTC™ product available from Cognex Corporation, Natick, Mass. [0034]
  • In step 216 (FIG. 2A), a determination is made as to whether training is complete. If so, the system is prepared for run-time inspection. If not, additional training images may be captured 220 (FIG. 2A), or further processing of previously-captured images may be performed 219 (FIG. 2B). The invention is inherently flexible with regard to the ordering of training steps. For example, all training images may be initially captured and then applied to the training system 80 as a group. Alternatively, each training image may be captured and individually applied to the system 80, the results of each iteration being accumulated in accumulators 112. [0035]
  • At the completion of training, a template image 113 and threshold image 115 are available for use during run-time inspection. [0036]
  • With reference to FIGS. 1B and 2B, the run-time inspection system comprises an imaging system 300 and a processing system 90. At the outset of run-time inspection, an object 304 of unknown status is imaged by imaging system 300 to generate a run-time image 301. As described above, the run-time image 301 and alignment model image 314 are presented to alignment system 306 to determine the affine pose 307 of the run-time image 301 with respect to the alignment model image 314 (step 226 of FIG. 2B). The run-time image 301 is likewise prefiltered (step 228 of FIG. 2B) by filter 308 to generate a filtered image 309. [0037]
  • In step 230 (FIG. 2B), the affine pose 307 and filtered run-time image 309 are applied to a General Affine Transform 310 to generate a transformed image 311 which aligns substantially with the template and threshold images 113, 115 computed during statistical training, as described above. [0038]
  • The transformed image is next processed in a technique referred to as “double subtraction” to produce an error image (step 232 of FIG. 2B). The first subtraction of the double subtraction provides a mean-corrected image 316, which can be represented by the following relationship: [0039]
  • Mean-Corrected Image=|I−Avg|
  • where I represents the transformed run-time image 311, and Avg represents the template image 113, for example the average image. A mean-corrected image may be generated using alternative techniques, for example temporal filtering. [0040]
  • The second subtraction of the double subtraction (step 232 of FIG. 2B) provides an error image 318, which can be represented by the following relationship: [0041]
  • Error Image=Mean Corrected Image−Threshold Image
  • where Threshold represents the threshold image 115, for example a linear function of the standard deviation image. [0042]
  • The error image can be further analyzed (step 233 of FIG. 2B) according to a number of techniques to determine object status. For example, the intensity and number of pixels can be counted and recorded, and a histogram computed, to determine the extent of the error. A morphological operator, for example an erosion operator, can be employed to eliminate isolated error pixels, followed by a counting of the error pixels. Alternatively, a connectivity analysis tool, or “blob” tool, may be employed. In this technique, connected regions of the error image are labeled, and statistics on the area, position, and orientation of the labeled regions are computed; these statistics can be used to classify the object as good or bad. [0043]
  • Following analysis of the error image, an object status is determined (step 234 of FIG. 2B) to categorize the inspected object as a defective part (step 238), or an acceptable part (step 236). [0044]
  • While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. [0045]

Claims (39)

What is claimed is:
1. A method for automated inspection of an object comprising:
imaging an object to generate a run-time image;
determining the affine pose of the run-time image with respect to an alignment model image;
prefiltering the run-time image to generate a filtered image;
transforming the filtered image with the affine pose to generate a transformed image;
mean-correcting the transformed image with a template image to provide a mean-corrected image;
comparing the mean-corrected image with a threshold image to produce an error image; and
analyzing the error image to determine object status.
2. The method of claim 1 further comprising determining a template and threshold image for the object.
3. The method of claim 2 wherein the template image comprises an average image.
4. The method of claim 2 wherein the threshold image comprises a function of a standard deviation image.
5. The method of claim 1 wherein determining the affine pose comprises determining the affine transform parameters which map the run-time image to the alignment model image.
6. The method of claim 1 wherein prefiltering comprises convolving the run-time image with a kernel adapted to eliminate high-frequency elements from the image.
7. The method of claim 6 wherein the kernel comprises an impulse function.
8. The method of claim 1 wherein transforming comprises applying the filtered run-time image to a General Affine Transform, the parameters of which are determined by the affine pose.
9. The method of claim 1 wherein mean correcting and comparing comprise the technique of double subtraction of the run-time image on a pixel-by-pixel basis, the output of which is an error image represented by:
Mean-Corrected Image=|I−Avg|
and
Error Image=Mean Corrected Image−Threshold Image
where I represents the transformed run-time image, Avg represents the template image, and Threshold represents the threshold image.
10. The method of claim 1 wherein the template image comprises an average image, and wherein the threshold image comprises a linear transformation of the standard deviation of the average image.
11. The method of claim 1 wherein the template and threshold images are determined by:
iteratively imaging training objects to generate a plurality of training images;
determining the affine pose of each training image with respect to an alignment model image;
prefiltering each training image to generate filtered training images;
transforming each of the filtered training images with the corresponding affine pose to generate a plurality of transformed training images; and
computing a template image and threshold image of the object from the plurality of transformed training images.
12. The method of claim 11 further comprising generating an alignment model image from a geometric model of the object.
13. The method of claim 11 further comprising generating an alignment model image by selecting a portion of one of the training images as an alignment model image.
14. The method of claim 11 wherein determining the training image affine pose comprises determining the affine transform parameters which map the training image to the alignment model image.
15. In an artificial vision system, a method for statistical training of the system on an object comprising:
iteratively imaging an object to generate a plurality of training images;
determining the affine pose of each training image with respect to an alignment model image;
prefiltering each training image to generate filtered images;
transforming each of the filtered images with the corresponding affine pose to generate a plurality of transformed images; and
computing a template image and threshold image of the object from the plurality of transformed images.
16. The method of claim 15 further comprising generating an alignment model image from a geometric model of the object.
17. The method of claim 15 further comprising generating an alignment model image by selecting a portion of one of the training images as an alignment model image.
18. The method of claim 15 wherein determining the affine pose comprises determining the affine transform parameters which map the training image to the alignment model image.
19. The method of claim 15 wherein prefiltering comprises convolving the training images with a kernel adapted to eliminate high frequency elements from the image.
20. The method of claim 19 wherein the kernel comprises an impulse function.
21. The method of claim 15 wherein the template image and threshold image are each computed in a single accumulator.
22. The method of claim 15 wherein the template image comprises an average image.
23. The method of claim 15 wherein the threshold image comprises a linear function of a standard deviation image.
24. The method of claim 15 wherein transforming comprises applying the filtered run-time image to a General Affine Transform, the parameters of which are determined by the affine pose.
25. The method of claim 15 further comprising:
during run-time, imaging a run-time object to generate a run-time image;
determining the affine pose of the run-time image with respect to the alignment model image;
prefiltering the run-time image to generate a filtered run-time image;
transforming the filtered run-time image with the affine pose of the run-time image to generate a transformed run-time image;
mean-correcting the transformed run-time image with the template image to provide a mean-corrected image;
comparing the mean-corrected image with the threshold image to produce an error image; and
analyzing the error image to determine object status.
26. A system for automated inspection of an object comprising:
an imaging system for imaging an object to generate a run-time image;
an alignment unit for determining the affine pose of the run-time image with respect to an alignment model image;
a filter for prefiltering the run-time image to generate a filtered image;
an affine transform for transforming the filtered image with the affine pose to generate a transformed image;
a mean-corrector for correcting the transformed image with a template image to provide a mean-corrected image;
a comparator for comparing the mean-corrected image with a threshold image to produce an error image;
an analyzer for analyzing the error image to determine object status.
27. The system of claim 26 further comprising means for determining a template and threshold image for the object.
28. The system of claim 26 wherein the affine pose comprises affine transform parameters which map the run-time image to the alignment model image.
29. The system of claim 26 wherein the mean-corrector and comparator perform double subtraction of the run-time image on a pixel-by-pixel basis, the output of which is an error image represented by:
Mean-Corrected Image=|I−Avg|
and
Error Image=Mean Corrected Image−Threshold Image
where I represents the transformed run-time image, Avg represents the template image, and Threshold represents the threshold image.
30. The system of claim 26 wherein the template image comprises an average image, and wherein the threshold image comprises a linear transformation of the standard deviation of the average image.
31. The system of claim 26 further comprising a system for determining the template and threshold images comprising:
an imaging system for iteratively imaging training objects to generate a plurality of training images;
an alignment unit for determining the affine pose of each training image with respect to an alignment model image;
a training filter for prefiltering each training image to generate filtered training images;
a training affine transform for transforming each of the filtered training images with the corresponding affine pose to generate a plurality of transformed training images; and
means for computing a template image and threshold image of the object from the plurality of transformed training images.
32. The system of claim 31 wherein the alignment model image is generated from a geometric model of the object.
33. The system of claim 31 wherein the alignment model image is generated by selecting a portion of one of the training images as an alignment model image.
34. The system of claim 31 wherein the training image affine pose comprises the affine transform parameters which map the training image to the alignment model image.
35. In an artificial vision system, a system for statistical training of the system on an object comprising:
an imaging system for iteratively imaging an object to generate a plurality of training images;
an alignment unit for determining the affine pose of each training image with respect to an alignment model image;
a filter for prefiltering each training image to generate filtered images;
an affine transform for transforming each of the filtered images with the corresponding affine pose to generate a plurality of transformed images; and
means for computing a template image and threshold image of the object from the plurality of transformed images.
36. The system of claim 35 wherein the alignment model image is generated from a geometric model of the object.
37. The system of claim 35 wherein the alignment model image is selected from one of the training images.
38. The system of claim 35 further comprising a template image accumulator and a threshold image accumulator for computing a single template representation of the object.
39. The system of claim 35 wherein, during run-time, the imaging system further images a run-time object to generate a run-time image, the system further comprising
a run-time alignment unit for determining the affine pose of the run-time image with respect to the alignment model image;
a run-time filter for prefiltering the run-time image to generate a filtered run-time image;
a run-time affine transform for transforming the filtered run-time image with the affine pose of the run-time image to generate a transformed run-time image;
a mean-corrector for correcting the transformed run-time image with the template image to provide a mean-corrected image;
a comparator for comparing the mean-corrected image with the threshold image to produce an error image;
an analyzer for analyzing the error image to determine object status.
US09/141,932 1998-08-28 1998-08-28 Automated inspection of objects undergoing general affine transformation Expired - Lifetime US6421458B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/141,932 US6421458B2 (en) 1998-08-28 1998-08-28 Automated inspection of objects undergoing general affine transformation


Publications (2)

Publication Number Publication Date
US20010012395A1 2001-08-09
US6421458B2 2002-07-16

Family

ID=22497863

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/141,932 Expired - Lifetime US6421458B2 (en) 1998-08-28 1998-08-28 Automated inspection of objects undergoing general affine transformation

Country Status (1)

Country Link
US (1) US6421458B2 (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7016539B1 (en) 1998-07-13 2006-03-21 Cognex Corporation Method for fast, robust, multi-dimensional pattern recognition
JP2000260699A (en) * 1999-03-09 2000-09-22 Canon Inc Position detecting device and semiconductor exposure apparatus using the position detecting device
US6850636B1 (en) * 1999-05-25 2005-02-01 Nichiha Corporation Surface inspection system
US7225153B2 (en) * 1999-07-21 2007-05-29 Longitude Llc Digital options having demand-based, adjustable returns, and trading exchange therefor
US6804416B1 (en) * 2001-03-16 2004-10-12 Cognex Corporation Method and system for aligning geometric object models with images
US7120301B2 (en) * 2002-04-10 2006-10-10 National Instruments Corporation Efficient re-sampling of discrete curves
US7133538B2 (en) * 2002-04-10 2006-11-07 National Instruments Corporation Pattern matching utilizing discrete curve matching with multiple mapping operators
US7171048B2 (en) * 2002-04-10 2007-01-30 National Instruments Corporation Pattern matching system utilizing discrete curve matching with a mapping operator
US7327887B2 (en) * 2002-04-10 2008-02-05 National Instruments Corporation Increasing accuracy of discrete curve transform estimates for curve matching
US7139432B2 (en) 2002-04-10 2006-11-21 National Instruments Corporation Image pattern matching utilizing discrete curve matching with a mapping operator
US7136505B2 (en) * 2002-04-10 2006-11-14 National Instruments Corporation Generating a curve matching mapping operator by analyzing objects of interest and background information
US7630560B2 (en) * 2002-04-10 2009-12-08 National Instruments Corporation Increasing accuracy of discrete curve transform estimates for curve matching in four or more dimensions
US7158677B2 (en) * 2002-08-20 2007-01-02 National Instruments Corporation Matching of discrete curves under affine transforms
US7120314B2 (en) * 2003-01-15 2006-10-10 Xerox Corporation Systems and methods for obtaining image shear and skew
JP3842233B2 (en) * 2003-03-25 2006-11-08 ファナック株式会社 Image processing apparatus and robot system
US7269286B2 (en) * 2003-06-05 2007-09-11 National Instruments Corporation Discrete curve symmetry detection
US7936928B2 (en) * 2003-06-05 2011-05-03 National Instruments Corporation Mutual symmetry detection
US7190834B2 (en) 2003-07-22 2007-03-13 Cognex Technology And Investment Corporation Methods for finding and characterizing a deformed pattern in an image
US8081820B2 (en) 2003-07-22 2011-12-20 Cognex Technology And Investment Corporation Method for partitioning a pattern into optimized sub-patterns
US8437502B1 (en) 2004-09-25 2013-05-07 Cognex Technology And Investment Corporation General pose refinement and tracking tool
US7391930B2 (en) * 2004-12-17 2008-06-24 Primax Electronics Ltd. Angle de-skew device and method thereof
US7796800B2 (en) * 2005-01-28 2010-09-14 Hewlett-Packard Development Company, L.P. Determining a dimensional change in a surface using images acquired before and after the dimensional change
US7412106B1 (en) * 2005-06-25 2008-08-12 Cognex Technology And Investment Corporation Methods for locating and decoding distorted two-dimensional matrix symbols
US7878402B2 (en) * 2005-12-20 2011-02-01 Cognex Technology And Investment Corporation Decoding distorted symbols
US8254676B2 (en) * 2007-12-31 2012-08-28 Morpho Detection, Inc. Methods and systems for identifying a thin object
DE102010043477A1 (en) * 2010-11-05 2012-05-10 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and X-ray inspection system for testing identical components using X-radiation
US8600192B2 (en) 2010-12-08 2013-12-03 Cognex Corporation System and method for finding correspondence between cameras in a three-dimensional vision system
US11488322B2 (en) 2010-12-08 2022-11-01 Cognex Corporation System and method for training a model in a plurality of non-perspective cameras and determining 3D pose of an object at runtime with the same
US9124873B2 (en) 2010-12-08 2015-09-01 Cognex Corporation System and method for finding correspondence between cameras in a three-dimensional vision system
DE112013002024T5 (en) * 2012-04-10 2015-03-05 Mahle Powertrain, Llc Color vision inspection system and method for inspecting a vehicle
US8971663B2 (en) 2012-05-21 2015-03-03 Cognex Corporation System and method for producing synthetic golden template image for vision system inspection of multi-layer patterns
US9679224B2 (en) 2013-06-28 2017-06-13 Cognex Corporation Semi-supervised method for training multiple pattern recognition and registration tool models
US10074036B2 (en) * 2014-10-21 2018-09-11 Kla-Tencor Corporation Critical dimension uniformity enhancement techniques and apparatus
US9639781B2 (en) * 2015-04-10 2017-05-02 Cognex Corporation Systems and methods for classification and alignment of highly similar or self-similar patterns
CN105258647B (en) * 2015-07-26 2017-11-21 湖北工业大学 A kind of visible detection method of automobile lock riveting point
US9996771B2 (en) 2016-02-15 2018-06-12 Nvidia Corporation System and method for procedurally synthesizing datasets of objects of interest for training machine-learning models
JP6333871B2 (en) * 2016-02-25 2018-05-30 ファナック株式会社 Image processing apparatus for displaying an object detected from an input image
US10607108B2 (en) 2018-04-30 2020-03-31 International Business Machines Corporation Techniques for example-based affine registration

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4805123B1 (en) * 1986-07-14 1998-10-13 Kla Instr Corp Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US4849679A (en) * 1987-12-31 1989-07-18 Westinghouse Electric Corp. Image processing system for an optical seam tracker
US5537669A (en) 1993-09-30 1996-07-16 Kla Instruments Corporation Inspection method and apparatus for the inspection of either random or repeating patterns
US5640200A (en) * 1994-08-31 1997-06-17 Cognex Corporation Golden template comparison using efficient image registration
US5793901A (en) * 1994-09-30 1998-08-11 Omron Corporation Device and method to detect dislocation of object image data

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1413983A3 (en) * 2002-10-23 2004-09-22 Keyence Corporation Image processing system and method for workpiece measurement
US7403218B2 (en) 2002-10-23 2008-07-22 Keyence Corporation Image processing system and image processing method
US20040135884A1 (en) * 2002-10-23 2004-07-15 Kazuhito Saeki Image processing system and image processing method
US20060124012A1 (en) * 2002-12-20 2006-06-15 Bernhard Frei Method and device for the real time control of print images
EP2259219A3 (en) * 2004-02-16 2015-06-24 Fondmatic - Societa' per Azioni Method for checking images
WO2005081188A1 (en) * 2004-02-16 2005-09-01 Fondmatic - Societa' Per Azioni Method for checking images
US9292915B2 (en) 2009-03-04 2016-03-22 VISIONx INC. Digital optical comparator
US20100225666A1 (en) * 2009-03-04 2010-09-09 VISIONx INC. Digital optical comparator
US8917320B2 (en) 2009-03-04 2014-12-23 VISIONx INC. Digital optical comparator
US9105077B2 (en) 2010-06-28 2015-08-11 Precitec Kg Method for classifying a multitude of images recorded by a camera observing a processing area and laser material processing head using the same
WO2012000650A1 (en) * 2010-06-28 2012-01-05 Precitec Kg A method for classifying a multitude of images recorded by a camera observing a processing area and laser material processing head using the same
US8503757B2 (en) * 2010-08-02 2013-08-06 Keyence Corporation Image measurement device, method for image measurement, and computer readable medium storing a program for image measurement
US20120027289A1 (en) * 2010-08-02 2012-02-02 Keyence Corporation Image Measurement Device, Method For Image Measurement, And Computer Readable Medium Storing A Program For Image Measurement
US20140126790A1 (en) * 2011-06-26 2014-05-08 Universite Laval Quality control and assurance of images
US9286547B2 (en) * 2011-06-26 2016-03-15 UNIVERSITé LAVAL Quality control and assurance of images
WO2013000081A1 (en) * 2011-06-26 2013-01-03 UNIVERSITé LAVAL Quality control and assurance of images
US9529824B2 (en) * 2013-06-05 2016-12-27 Digitalglobe, Inc. System and method for multi resolution and multi temporal image search
US20170293611A1 (en) * 2016-04-08 2017-10-12 Samsung Electronics Co., Ltd. Method and device for translating object information and acquiring derivative information
US10990768B2 (en) * 2016-04-08 2021-04-27 Samsung Electronics Co., Ltd Method and device for translating object information and acquiring derivative information
CN109299758A (en) * 2018-07-27 2019-02-01 深圳市中兴系统集成技术有限公司 A kind of intelligent polling method, electronic equipment, intelligent inspection system and storage medium
US11010888B2 (en) * 2018-10-29 2021-05-18 International Business Machines Corporation Precision defect detection based on image difference with respect to templates

Also Published As

Publication number Publication date
US6421458B2 (en) 2002-07-16

Similar Documents

Publication Publication Date Title
US6421458B2 (en) Automated inspection of objects undergoing general affine transformation
US6381366B1 (en) Machine vision methods and system for boundary point-based comparison of patterns and images
Paglieroni Distance transforms: Properties and machine vision applications
CN111915485B (en) Rapid splicing method and system for feature point sparse workpiece images
US6687402B1 (en) Machine vision methods and systems for boundary feature comparison of patterns and images
Ortin et al. Indoor robot motion based on monocular images
CN115131587A (en) Template matching method of gradient vector features based on edge contour
CN105957082A (en) Printing quality on-line monitoring method based on area-array camera
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
CN111251336A (en) A dual-arm collaborative intelligent assembly system based on visual positioning
JP6899189B2 (en) Systems and methods for efficiently scoring probes in images with a vision system
CN111738320B (en) Shielded workpiece identification method based on template matching
EP4226225B1 (en) A line clearance system
DE102015113434A1 (en) Method for object localization and pose estimation for an object of interest
CN114881945A (en) Method and system for automatically searching and extracting workpiece weld joint feature points under complex background
CN114998314A (en) PCB (printed Circuit Board) defect detection method based on computer vision
US6577775B1 (en) Methods and apparatuses for normalizing the intensity of an image
CN112991374A (en) Canny algorithm-based edge enhancement method, device, equipment and storage medium
CN119897684B (en) Memory assembly method, system, electronic device, storage medium and product
Paudel et al. Robust and optimal sum-of-squares-based point-to-plane registration of image sets and structured scenes
US6714670B1 (en) Methods and apparatuses to determine the state of elements
CN115760721B (en) Automatic chip component identification and positioning method based on computer vision
Pei et al. Welding component identification and solder joint inspection of automobile door panel based on machine vision
Lee et al. MATT-GS: Masked Attention-based 3DGS for Robot Perception and Object Detection
CN118351059A (en) Circuit board detection method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: COGNEX CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MICHAEL, DAVID J.;REYZIN, IGOR;REEL/FRAME:009577/0389;SIGNING DATES FROM 19981023 TO 19981103

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12