WO2021064893A1 - Workpiece surface defect detection device and detection method, workpiece surface inspection system, and program - Google Patents
Workpiece surface defect detection device and detection method, workpiece surface inspection system, and program
- Publication number
- WO2021064893A1 (PCT/JP2019/038908)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- defect candidate
- defect
- images
- work
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10152—Varying illumination
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Definitions
- The present invention relates to a workpiece surface defect detection device and detection method, a workpiece surface inspection system, and a program that detect surface defects based on a plurality of images obtained by continuously imaging, with an imaging means, a portion to be measured such as the painted surface of a workpiece, while the workpiece, such as a vehicle body, is moved relative to an illuminating device so that a light-dark pattern of illumination light is applied to the measured portion.
- Patent Document 1 is known as a technique for detecting surface defects of a workpiece by continuously imaging the measured portion with an imaging means while moving the workpiece, such as a vehicle body, relative to the lighting device. However, Patent Document 1 does not describe a specific defect detection algorithm that uses the images obtained by the imaging means.
- Patent Document 2 describes that a plurality of images are fused into one image to detect surface defects.
- Patent Document 3 proposes a defect inspection method for a surface to be inspected that enables highly accurate detection by processing a plurality of images captured in time series and detecting defects based on the movement of defect candidate points extracted from each image.
- Specifically, the surface to be inspected 1 is moved so that the imaged portion A shifts with the passage of time, images from the imaging means 2 are input at arbitrary times, defect candidate points B are extracted from each of two different input images, an error range is set around the defect candidate point B of the later-captured image, and it is checked whether the defect candidate point B of the earlier-captured image falls within that range.
- A defect candidate point B that is listed as a match candidate more than a predetermined number of times is determined to be a defect existing on the surface to be inspected 1.
- However, Patent Document 2 has the problem that the defect detection process is complicated; for example, three different conversion processes must be performed on the fused image.
- the inspection method described in Patent Document 3 is basically a process based on a single image, so that the amount of information is insufficient, and there is a problem that small surface defects cannot be discriminated with sufficient accuracy.
- The present invention has been made to solve these problems, and its object is to provide a workpiece surface defect detection device and detection method, a workpiece surface inspection system, and a program capable of stably detecting small surface defects with high accuracy without requiring a plurality of complicated conversion processes.
- A workpiece surface defect detection device comprising: an image acquisition means for acquiring a plurality of images of a portion to be measured of the workpiece while the light-dark pattern of the lighting device is moved relative to the workpiece whose surface defects are to be detected; a temporary defect candidate extraction means for extracting a temporary defect candidate from each image acquired by the image acquisition means; a defect candidate determination means for determining the temporary defect candidate as a defect candidate when, among the plurality of images from which the temporary defect candidate has been extracted by the temporary defect candidate extraction means, the number of images containing the temporary defect candidate is equal to or greater than a preset threshold value; an image synthesizing means for creating a composite image by combining a plurality of images containing the defect candidate determined by the defect candidate determination means; and a detection means for detecting defects based on the composite image created by the image synthesizing means.
- The defect candidate determination means stops the process of determining a temporary defect candidate as a defect candidate when, partway through that process, the number of images containing the temporary defect candidate has not reached a preset second threshold value that is smaller than the first threshold value. The workpiece surface defect detection device according to item 1 above.
- The defect candidate determination means determines the temporary defect candidate as a defect candidate when, among the plurality of images, the number of images in which the positions of the temporary defect candidate correspond is equal to or greater than a preset threshold value. The workpiece surface defect detection device according to item 2 above.
- The position of the temporary defect candidate is represented by the coordinates of the temporary defect candidate. A coordinate estimation means obtains the coordinates of the temporary defect candidate extracted by the temporary defect candidate extraction means and calculates, for each of a plurality of images following the image from which it was extracted, the coordinates to which the coordinates of the temporary defect candidate move, thereby obtaining estimated coordinates. The defect candidate determination means determines, for each of the subsequent images, whether the estimated coordinates calculated by the coordinate estimation means correspond to the temporary defect candidate in that image, and determines the temporary defect candidate as a defect candidate if the number of subsequent images in which the estimated coordinates and the temporary defect candidate correspond is equal to or greater than a preset threshold value.
- (5) An image group creation means is provided which, for each defect candidate determined by the defect candidate determination means, cuts out a predetermined area around the defect candidate as an estimation region from each of the plurality of images containing the defect candidate, thereby creating an image group composed of a plurality of estimation-region images; the image synthesizing means creates a composite image by combining, for each defect candidate, the plurality of estimation-region images created by the image group creation means. The workpiece surface defect detection device according to any one of the above items.
- The synthesizing means aligns the plurality of estimation-region images when creating the composite image by superimposing the center coordinates of the images.
- The synthesizing means corrects the center coordinates of each image based on the distance of each center coordinate from a boundary of the light-dark pattern in the image, which corresponds to the light-dark pattern of the lighting device, and superimposes the corrected center coordinates.
- For the alignment of the plurality of estimation-region images when composing the composite image, the synthesizing means selects, from a plurality of combinations in which the center coordinates of each image are shifted in the X and Y coordinate directions, the combination that maximizes an evaluation value.
- The temporary defect candidate extraction means extracts temporary defect candidates using at least two types of routines, a routine for extracting small defects and a routine for extracting gently sloping convex defects, and corrects the defect size for the temporary defect candidates detected by the routine for gently sloping convex defects. The workpiece surface defect detection device according to any one of items 1 to 9 above.
- A workpiece surface inspection system comprising: an illuminating device that irradiates the workpiece whose surface defects are to be detected with illumination light having a light-dark pattern; an imaging means that captures the light of the illuminating device reflected from the portion to be measured of the workpiece; a control means that controls the imaging means; and the workpiece surface defect detection device according to any one of items 1 to 11 above, wherein the image acquisition means acquires a plurality of images of the measured portion of the workpiece from the imaging means.
- A workpiece surface defect detection method in which a workpiece surface defect detection device executes: a defect candidate determination step of determining a temporary defect candidate as a defect candidate when, among the plurality of images from which temporary defect candidates have been extracted, the number of images containing the temporary defect candidate is equal to or greater than a preset threshold value; an image composition step of creating a composite image by combining a plurality of images containing the defect candidate determined in the defect candidate determination step; and a detection step of performing defect detection based on the composite image created in the image composition step.
- In the defect candidate determination step, the process of determining a temporary defect candidate as a defect candidate is stopped when, partway through that process, the number of images containing the temporary defect candidate has not reached a preset second threshold value that is smaller than the first threshold value.
- In the defect candidate determination step, the temporary defect candidate is determined as a defect candidate when, among the plurality of images, the number of images in which the positions of the temporary defect candidate correspond is equal to or greater than a preset threshold value. The workpiece surface defect detection method according to item 14 above.
- The position of the temporary defect candidate is represented by the coordinates of the temporary defect candidate. A coordinate estimation step obtains the coordinates of the temporary defect candidate extracted in the temporary defect candidate extraction step and calculates, for each of a plurality of images following the image from which it was extracted, the coordinates to which the coordinates of the temporary defect candidate move, thereby obtaining estimated coordinates. In the defect candidate determination step, it is determined, for each of the subsequent images, whether the estimated coordinates calculated in the coordinate estimation step correspond to the temporary defect candidate in that image, and the temporary defect candidate is determined as a defect candidate if the number of subsequent images in which the estimated coordinates and the temporary defect candidate correspond is equal to or greater than a preset threshold value.
- An image group creation step is provided which, for each defect candidate determined in the defect candidate determination step, cuts out a predetermined area around the defect candidate as an estimation region from each of the plurality of images containing the defect candidate, thereby creating an image group composed of a plurality of estimation-region images; in the image composition step, the plurality of estimation-region images created in the image group creation step are combined for each defect candidate to create a composite image.
- The center coordinates of each image are corrected based on the distance of each center coordinate from a boundary of the light-dark pattern in the image, which corresponds to the light-dark pattern of the lighting device, and the corrected center coordinates are superimposed for alignment. The workpiece surface defect detection method according to item 19 above.
- For the alignment of the plurality of estimation-region images when creating the composite image, the combination that maximizes an evaluation value is selected from a plurality of combinations in which the center coordinates of each image are shifted in the X and Y coordinate directions.
- In the temporary defect candidate extraction step, temporary defect candidates are extracted by at least two types of routines, a routine for extracting small defects and a routine for extracting gently sloping convex defects, and the defect size is corrected for the temporary defect candidates detected by the routine for gently sloping convex defects.
- According to the above inventions, a plurality of images of the portion to be measured of the workpiece are acquired while the light-dark pattern of the lighting device is moved relative to the workpiece whose surface defects are to be detected, and temporary defect candidates are extracted from each of the acquired images. If, among the plurality of images from which temporary defect candidates have been extracted, the number of images containing a temporary defect candidate is equal to or greater than a preset threshold value, the temporary defect candidate is determined to be a defect candidate, a composite image is created by combining the plurality of images containing the determined defect candidate, and defect detection is then performed based on the created composite image.
- Since the composite image contains the information of a plurality of images, defect detection can be performed using a large amount of information for one defect candidate, so even small surface defects can be detected stably and with high accuracy while suppressing over-detection and erroneous detection.
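- As a rough, non-authoritative sketch of this overall flow, the following Python code assumes hypothetical helpers extract_temporary_candidates(), count_images_containing(), synthesize(), and detect() standing in for the extraction, counting, synthesis, and detection steps described above; it illustrates the image-count-threshold pipeline, not the embodiment's actual implementation.
```python
def detect_surface_defects(images, count_threshold, extract_temporary_candidates,
                           count_images_containing, synthesize, detect):
    """Sketch of the claimed pipeline: extract temporary candidates per image,
    promote a candidate when enough images contain it, synthesize those images,
    and run detection on the composite.  All helper functions are assumptions."""
    detections = []
    for candidate in extract_temporary_candidates(images):
        containing = count_images_containing(candidate, images)   # images holding this candidate
        if len(containing) >= count_threshold:                    # promote to defect candidate
            composite = synthesize(containing, candidate)         # combine the containing images
            detections.extend(detect(composite, candidate))       # detect defects on the composite
    return detections
```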
- Further, when, during the process of determining a temporary defect candidate as a defect candidate, the number of images containing the temporary defect candidate is too small to reach the first threshold value, the determination process is stopped; this prevents useless processing from being continued when the temporary defect candidate is not likely to become a defect candidate, further reduces the processing load, and further improves the detection accuracy.
- Further, the temporary defect candidate is determined as a defect candidate when, among the plurality of images, the number of images in which the positions of the temporary defect candidate correspond is equal to or greater than a preset threshold value.
- Further, the coordinates of the extracted temporary defect candidate are obtained, and for each of a plurality of images following the image from which it was extracted, the coordinates to which the candidate's coordinates should move are calculated to obtain estimated coordinates; it is then determined whether these estimated coordinates correspond to a temporary defect candidate in the image, and if the number of images in which they correspond is equal to or greater than a preset threshold value, the temporary defect candidate is determined as a defect candidate. It is therefore possible to reliably determine whether the number of images in which the position of the temporary defect candidate corresponds is equal to or greater than the preset threshold value.
- Further, a plurality of estimation-region images cut out from the plurality of images containing the defect candidate are combined into one image and defect detection is performed based on the combined image; therefore, only the regions necessary for defect detection need to be synthesized, and defect detection can be performed with high accuracy.
- Further, surface defects can be detected based on at least one composite image among a standard deviation image, a phase image, a phase difference image, a maximum value image, a minimum value image, and an average value image.
- Further, since the alignment of the plurality of estimation-region images when creating the composite image is performed by superimposing the center coordinates of each image, a highly accurate composite image can be created, and surface defect detection can in turn be performed with high accuracy.
- Further, since the center coordinates of each image are corrected based on the distance of each center coordinate from a boundary of the light-dark pattern in the image, which corresponds to the light-dark pattern of the lighting device, the accuracy of the composite image can be further improved.
- Further, since the alignment of the plurality of estimation-region images when creating the composite image is performed by selecting, from a plurality of combinations in which the center coordinates of each image are shifted in at least one of the X and Y coordinate directions, the combination that maximizes the evaluation value, a composite image with higher accuracy can be created and surface defects can be detected with higher accuracy.
- Further, temporary defect candidates are extracted by at least two types of routines, a routine for extracting small defects and a routine for extracting gently sloping convex defects, and the defect size is corrected for the temporary defect candidates detected by the routine for gently sloping convex defects, so the temporary defect candidates can be reliably extracted.
- Further, thread lumps can be determined based on the amount by which the two-dimensional shape of the defect signal in the phase image of the defect candidate deviates from roundness.
- Further, a computer can be caused to execute a process of combining a plurality of images containing a defect candidate into one image and detecting defects based on the combined image.
- FIG. 1 is a plan view showing a configuration example of a workpiece surface inspection system according to one embodiment of the present invention. FIG. 2 is a vertical sectional view of the lighting frame as viewed from the front in the traveling direction of the workpiece. FIG. 3 is a vertical sectional view of the camera frame as viewed from the front in the traveling direction of the workpiece. FIG. 4 is a plan view showing the electrical configuration of the workpiece surface inspection system shown in FIG. 1. FIG. 5 is a flowchart showing the processing of the entire workpiece surface defect inspection system.
- In FIG. 6, (A) shows images continuously acquired from one camera in chronological order, (B) is a diagram showing the state in which the coordinates of a temporary defect candidate are estimated in the subsequent images with respect to the first image of (A), (C) is a diagram showing a process of superimposing the images of the estimated-region image group to create a composite image, and (D) is a diagram showing another process of superimposing the images of the estimated-region image group to create a composite image.
- FIG. 7 is a diagram for explaining the process of correcting the center coordinates of an estimated-region image according to the position of the defect candidate relative to the boundary between a bright band portion and a dark band portion in the image.
- FIG. 8 (A) to (D) are diagrams showing processes of superimposing the images of an estimated-region image group in different manners to create composite images. FIG. 9 is a diagram for explaining an example of the temporary defect candidate extraction process.
- FIG. 1 is a plan view showing a configuration example of a surface inspection system for a work according to an embodiment of the present invention.
- In this embodiment, an example is shown in which the work 1 is a vehicle body, the portion to be measured of the work 1 is the painted surface of the vehicle body, and surface defects of that painted surface are detected. The surface of the vehicle body is subjected to surface treatment, metallic coating, clear coating, and the like to form a coating film layer with a multilayer structure, and surface defects occur in this coating film layer.
- the work 1 is not limited to the vehicle body and may be a work other than the vehicle body.
- the portion to be measured may be a surface other than the painted surface.
- This inspection system includes a work moving mechanism 2 that continuously moves the work 1 in the direction of arrow F at a predetermined speed.
- Two lighting frames 3, 3 are installed in a state where both lower ends in the direction orthogonal to the moving direction of the work are fixed to the support bases 4, 4. Further, the lighting frames 3, 3 are connected to each other by two connecting members 5, 5.
- the number of lighting frames is not limited to two.
- Each lighting frame 3 is formed in a gate shape, as shown in the vertical sectional view of FIG. 2 viewed from the front in the traveling direction of the vehicle body, and a lighting unit 6 for illuminating the work 1 is installed on each lighting frame 3.
- The lighting unit 6 consists of linear lighting elements attached along the inner shape of the lighting frame 3 so as to surround the peripheral surface of the work 1 excluding its lower surface, and a plurality of these linear lighting elements are attached to the lighting frame 3 at equal intervals in the moving direction of the work 1. The lighting unit 6 therefore diffusely illuminates the work with illumination light of a light-dark fringe pattern in which illuminated portions and non-illuminated portions alternate in the moving direction of the work 1.
- the lighting unit may be curved.
- A camera frame 7 is installed midway between the two front and rear lighting frames 3, 3, in a state where both lower ends in the direction orthogonal to the moving direction of the work are fixed to the supports 4, 4. The camera frame 7 is formed in a gate shape, as shown in the vertical sectional view of FIG. 3 viewed from the front in the traveling direction of the work 1, and a plurality of cameras 8 serving as imaging means are attached along its inner shape so as to surround the peripheral surface of the work 1 excluding its lower surface.
- While the work 1 is moved at a predetermined speed by the work moving mechanism 2 and is diffusely illuminated by the illumination light of the light-dark stripe pattern of the lighting unit 6, the plurality of cameras 8 attached to the camera frame 7 continuously image each circumferential part of the work 1 as a measured portion. The imaging is performed so that most of the imaging range overlaps between consecutive shots. As a result, each camera 8 outputs a plurality of images in which the position of the measured portion of the work 1 shifts continuously in the moving direction of the work 1.
- FIG. 4 is a plan view showing the electrical configuration of the work surface inspection system shown in FIG. 1.
- A first position sensor 11, a vehicle body information detection sensor 12, a second position sensor 13, a vehicle body speed sensor 14, and a third position sensor 15 are provided in this order from the entry side along the moving direction of the work 1.
- the first position sensor 11 is a sensor that detects that the next work 1 has approached the inspection area.
- the vehicle body information detection sensor 12 is a sensor that detects the ID, vehicle type, color, destination information, and the like of the vehicle body to be inspected.
- the second position sensor 13 is a sensor that detects that the work 1 has entered the inspection area.
- the vehicle body speed sensor 14 detects the moving speed of the work 1 and monitors the position of the work 1 by calculation, but the position sensor may directly monitor the position of the work.
- the third position sensor 15 is a sensor that detects that the work 1 has left the inspection area.
- the work surface defect inspection system further includes a master PC 21, a defect detection PC 22, a HUB 23, a NAS (Network Attached Storage) 24, a display 25, and the like.
- the master PC 21 is a personal computer that comprehensively controls the entire surface defect inspection system of the work, and includes a processor such as a CPU, a memory such as a RAM, a storage device such as a hard disk, and other hardware and software.
- The master PC 21 includes a movement control unit 211, a lighting unit control unit 212, a camera control unit 213, and the like as functions of the CPU.
- The movement control unit 211 controls the movement, stopping, and moving speed of the work moving mechanism 2,
- the lighting unit control unit 212 controls the lighting of the lighting unit 6
- the camera control unit 213 controls the imaging of the camera 8. Imaging by the camera 8 is continuously performed in response to a trigger signal continuously transmitted from the master PC 21 to the camera 8.
- The defect detection PC 22 is a surface defect detection device that executes the surface defect detection processing, and is composed of a personal computer provided with a processor such as a CPU, a memory such as a RAM, a storage device such as a hard disk, and other hardware and software.
- The defect detection PC 22 includes, as functions of the CPU, an image acquisition unit 221, a temporary defect candidate extraction unit 222, a coordinate estimation unit 223, a defect candidate determination unit 224, an image group creation unit 225, an image synthesis unit 226, a defect detection unit 227, and the like.
- the image acquisition unit 221 acquires a plurality of images that are continuously imaged by the camera 8 in time series and transmitted from the camera 8 by GigE (Gigabit Ethernet (registered trademark)).
- The temporary defect candidate extraction unit 222 extracts temporary defect candidates from the plurality of images from the camera 8 acquired by the image acquisition unit 221, and the coordinate estimation unit 223 estimates the coordinates of each extracted temporary defect candidate in the subsequent images.
- The defect candidate determination unit 224 determines defect candidates by matching the estimated coordinates of the temporary defect candidate with the actual temporary defect candidates, and the image group creation unit 225 cuts out the area around each determined defect candidate to create an image group consisting of a plurality of images for image synthesis.
- The image synthesizing unit 226 synthesizes the images of the created image group into one image, and the defect detection unit 227 detects and discriminates defects from the composite image. The specific surface defect detection processing performed by each of these units of the defect detection PC 22 will be described later.
- the NAS24 is a storage device on the network and stores various data.
- The display 25 displays the surface defects detected by the defect detection PC 22 in association with the position information of the vehicle body, which is the work 1, and the HUB 23 has the function of sending and receiving data among the master PC 21, the defect detection PC 22, the NAS 24, the display 25, and the like.
- the master PC 21 continuously transmits a trigger signal to each camera 8 in a state where the work 1 is illuminated from the surroundings by the illumination light of the light and dark stripe pattern by the lighting unit 6. Then, each camera 8 continuously images the part to be measured of the work 1.
- The master PC 21 sets the interval between the trigger signals, in other words the imaging interval, so that most of the imaging range overlaps between consecutive shots. By such imaging, each camera 8 obtains a plurality of images in which the position of the portion to be measured of the work 1 shifts continuously in the moving direction according to the movement of the work 1.
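- As a simple illustration of this timing constraint, the following hedged calculation derives a trigger interval from an assumed camera field of view along the moving direction and a target overlap ratio; both values are illustrative and not taken from the embodiment.
```python
def trigger_interval_s(field_of_view_mm, overlap_ratio, work_speed_mm_s):
    """Trigger interval so that consecutive shots overlap by overlap_ratio.

    field_of_view_mm -- extent of one camera image along the moving direction (assumed known)
    overlap_ratio    -- e.g. 0.9 for "most of the imaging range overlaps"
    work_speed_mm_s  -- moving speed of the work reported by the speed sensor
    """
    advance_per_shot_mm = field_of_view_mm * (1.0 - overlap_ratio)
    return advance_per_shot_mm / work_speed_mm_s

# e.g. 200 mm field of view, 90 % overlap, 100 mm/s line speed -> 0.2 s between triggers
interval = trigger_interval_s(200.0, 0.9, 100.0)
```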
- Such a plurality of images can be obtained not only when, as in the present embodiment, only the work 1 moves relative to the fixed lighting unit 6 and cameras 8, but also when the work 1 is fixed and the lighting unit 6 and cameras 8 are moved relative to it, or when the work 1 and the cameras 8 are fixed and only the lighting unit 6 is moved. That is, it is sufficient that the light-dark pattern of the lighting unit 6 moves relative to the work 1 by moving at least one of the work 1 and the lighting unit 6.
- the plurality of images obtained by each camera 8 are transmitted to the defect detection PC 22, and the image acquisition unit 221 of the defect detection PC 22 acquires a plurality of images transmitted from each camera 8.
- the defect detection PC 22 executes a surface defect detection process using these images.
- In step S01, the master PC 21 determines whether the work 1 has approached the inspection range based on the signal of the first position sensor 11; if not (NO in step S01), it stays in step S01.
- In step S02, the master PC 21 acquires individual information such as the ID, vehicle type, color, and destination information of the vehicle body to be inspected based on the signal from the vehicle body information detection sensor 12.
- In step S03, initial information such as the parameters of the inspection system and the inspection range on the vehicle body is set.
- In step S04, the master PC determines whether the work 1 has entered the inspection range based on the signal of the second position sensor 13; if not (NO in step S04), it stays in step S04.
- In step S05, the moving work 1 is imaged by the cameras 8 in chronological order with most of the imaging range overlapping.
- In step S06, the pre-stage processing of the surface defect detection processing by the defect detection PC 22 is performed. The pre-stage processing will be described later.
- In step S07, it is determined whether the work 1 has exited the inspection range based on the signal of the third position sensor 15. If it has not exited (NO in step S07), the process returns to step S05, and imaging and pre-stage processing are continued.
- In step S08, the post-stage processing of the surface defect detection processing by the defect detection PC 22 is performed. That is, in this embodiment, the post-stage processing is performed after all imaging of the work 1 is completed. The post-stage processing will be described later.
- In step S09, the result of the surface defect detection processing is displayed on the display 25 or the like.
- The defect detection PC 22 acquires from each camera 8 a plurality of images in which the position of the measured portion of the work 1 is continuously shifted in the moving direction. This situation is shown in FIG. 6. Images A11 to A17 in FIG. 6(A) are images continuously acquired from one camera 8 in chronological order.
- The light-dark pattern displayed in the images, in which bright bands (white portions) extending in the vertical direction and dark bands (black portions) alternate in the horizontal direction, corresponds to the light-dark fringe pattern of the illumination light of the lighting unit 6.
- the temporary defect candidate extraction unit 222 of the defect detection PC 22 extracts temporary defect candidates from each image. Extraction of temporary defect candidates is executed by performing processing such as background removal and binarization, for example. In this example, it is assumed that the temporary defect candidate 30 is extracted from all the images A11 to A17.
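- As a hedged illustration of this step, the following sketch approximates background removal by subtracting a heavily blurred copy of the image and then binarizes and labels the residual; the blur, threshold, and minimum-area values are illustrative assumptions, not values used by the embodiment.
```python
import numpy as np
from scipy import ndimage

def extract_temporary_defect_candidates(image, blur_sigma=15, threshold=20, min_area=3):
    """Extract temporary defect candidates from one camera image (a sketch).

    Background removal is approximated by subtracting a heavily blurred copy of
    the image; the residual is binarized and grouped into connected regions.
    """
    img = image.astype(np.float32)
    background = ndimage.gaussian_filter(img, sigma=blur_sigma)  # light-dark stripe background
    residual = np.abs(img - background)                          # defects stand out as residuals
    binary = residual > threshold                                # binarization
    labels, n = ndimage.label(binary)                            # group pixels into candidate regions
    candidates = []
    for region in range(1, n + 1):
        ys, xs = np.nonzero(labels == region)
        if xs.size >= min_area:                                  # ignore isolated noise pixels
            # representative coordinates = centroid of the candidate region
            candidates.append({"coords": (float(xs.mean()), float(ys.mean())), "area": int(xs.size)})
    return candidates
```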
- The coordinate estimation unit 223 calculates, for the temporary defect candidate 30 of each extracted image, the representative coordinates giving the position of the temporary defect candidate 30, and sets a predetermined area around the representative coordinates as the temporary defect candidate region. Further, the coordinates to which the calculated representative coordinates of the temporary defect candidate move in each of the subsequent images A12 to A17 are calculated based on the movement amount of the work 1, and the estimated coordinates in each image are obtained. For example, the coordinates to which the temporary defect candidate 30 extracted in image A11 moves are calculated for each of the subsequent images A12 to A17, and the estimated coordinates in each image are obtained.
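- The following minimal sketch shows one way such estimated coordinates could be computed, assuming the work movement between consecutive shots is known in pixels and is purely horizontal in the image; both assumptions are simplifications of the embodiment.
```python
def estimate_coordinates(representative_xy, frame_gap, movement_px_per_frame):
    """Estimate where a temporary defect candidate should appear in a later image.

    representative_xy     -- (x, y) of the candidate in the image where it was extracted
    frame_gap             -- how many images later the target image was captured
    movement_px_per_frame -- work movement between consecutive shots, in pixels
                             (assumed constant and purely horizontal here)
    """
    x, y = representative_xy
    return (x + frame_gap * movement_px_per_frame, y)

# e.g. candidate found at (120.0, 340.0) in image A11, estimated position in image A14:
estimated = estimate_coordinates((120.0, 340.0), frame_gap=3, movement_px_per_frame=25.0)
```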
- the states in which the estimated coordinates 40 of the temporary defect candidate 30 are estimated in the subsequent images A12 to A17 of the image A11 are shown in the images B12 to B17 of FIG. 6B.
- Images B12 to B17 are the same as images A12 to A17 except that the temporary defect candidate 30 has been removed.
- some of the images in the middle are omitted.
- the light and dark patterns that appear in the image are omitted.
- The defect candidate determination unit 224 matches the subsequent images A12 to A17 of the image A11 shown in FIG. 6(A) with the corresponding images B12 to B17 of FIG. 6(B), for which the estimated coordinates 40 of the temporary defect candidate 30 have been obtained: image A12 with image B12, image A13 with image B13, ..., image A17 with image B17. Matching determines whether the estimated coordinates 40 and the actual temporary defect candidate 30 in the image correspond. Specifically, it is performed by determining whether the estimated coordinates 40 are included in the predetermined temporary defect candidate region set for the actual temporary defect candidate 30 in the image.
- Alternatively, whether the estimated coordinates 40 and the actual temporary defect candidate 30 in the image correspond may be determined by checking whether the temporary defect candidate 30 exists within a preset range around the estimated coordinates 40, or whether the estimated coordinates 40 fall within a preset range around the representative coordinates of the temporary defect candidate 30.
- the temporary defect candidate 30 included in the original image A11 and the temporary defect candidate 30 included in the subsequent image can be regarded as the same.
- the number of images corresponding (matching) between the estimated coordinates 40 and the actual temporary defect candidate 30 in the image is examined, and it is determined whether or not the number is equal to or greater than a preset threshold value. If it is equal to or greater than the threshold value, it is highly probable that the temporary defect candidate 30 actually exists, so that the temporary defect candidate 30 of each image is determined as the defect candidate.
- In this example, all of the subsequent images A12 to A17 of image A11 are matched; that is, in each of them the estimated coordinates 40 are included in the temporary defect candidate region of the temporary defect candidate 30 in the image.
- On the other hand, if the number of matched images is less than the threshold value, the temporary defect candidate 30 is not likely to be a defect candidate, so matching is stopped and the next temporary defect candidate 30 is extracted.
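- A minimal sketch of this matching test and the promotion decision, assuming the temporary defect candidate region is an axis-aligned box around the representative coordinates (the embodiment only specifies "a predetermined region", so the box is an assumption):
```python
def is_match(estimated_xy, candidate_region):
    """True when the estimated coordinates fall inside the candidate's region box."""
    x, y = estimated_xy
    x_min, y_min, x_max, y_max = candidate_region
    return x_min <= x <= x_max and y_min <= y <= y_max

def is_defect_candidate(match_flags, threshold):
    """A temporary defect candidate is promoted to a defect candidate
    when the number of matched subsequent images reaches the threshold."""
    return sum(match_flags) >= threshold

# e.g. five of six subsequent images matched, threshold of 5 -> promoted
promoted = is_defect_candidate([True, True, False, True, True, True], threshold=5)
```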
- Next, for all the images containing the defect candidate, the image group creation unit 225 cuts out, as an estimation region, a predetermined region around the representative coordinates of the defect candidate, as indicated by the square frames surrounding images A11 to A17 in FIG. 6(A), and creates an estimation-region image group composed of a plurality of estimation-region images C11 to C17, as shown in FIG. 6(C).
- The estimation regions may also be obtained by first determining the estimation region of the original image A11 and then calculating the position of the estimation region in each subsequent image from the movement amount of the work 1.
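- A hedged sketch of this cropping step, assuming a fixed square window around the representative coordinates (the window size is an illustrative assumption):
```python
import numpy as np

def cut_estimation_region(image, center_xy, half_size=32):
    """Cut out a square estimation region around a defect candidate, clipped to the image."""
    cx, cy = (int(round(v)) for v in center_xy)
    h, w = image.shape[:2]
    x0, x1 = max(cx - half_size, 0), min(cx + half_size, w)
    y0, y1 = max(cy - half_size, 0), min(cy + half_size, h)
    return image[y0:y1, x0:x1]

def create_estimation_region_group(images, centers, half_size=32):
    """Build the estimation-region image group: one crop per image containing the candidate."""
    return [cut_estimation_region(img, c, half_size) for img, c in zip(images, centers)]
```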
- the image synthesizing unit 226 superimposes and synthesizes the estimated region images C11 to C17 of the estimated region image group thus created to create one composite image 51 shown in FIG. 6 (C).
- the superposition is performed at the center coordinates of the estimated region images C11 to C17.
- Examples of the composite image 51 include at least one of an image synthesized by calculating a statistical variation value such as a standard deviation image, a phase image, a phase difference image, a maximum value image, a minimum value image, and an average value image. Synthesis by calculating a statistical variation value, such as the standard deviation image, will be described later.
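- As a sketch of the pixelwise composites named here, the following code stacks the estimation-region images (assumed to be already aligned and of identical size) and computes the standard deviation, maximum, minimum, and average images; the phase and phase-difference composites are omitted because their computation is not specified at this point.
```python
import numpy as np

def make_composite_images(region_images):
    """Pixelwise composites of aligned, equally sized estimation-region images."""
    stack = np.stack([img.astype(np.float32) for img in region_images], axis=0)
    return {
        "std": stack.std(axis=0),    # statistical variation (standard deviation image)
        "max": stack.max(axis=0),    # maximum value image
        "min": stack.min(axis=0),    # minimum value image
        "mean": stack.mean(axis=0),  # average value image
    }
```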
- the defect detection unit 227 detects surface defects using the created composite image 51.
- the detection criteria for surface defects may be freely selected. For example, as shown in the signal graph 61 of FIG. 6C, when the signal is equal to or higher than the reference value, it may be determined that there is a defect, and only the presence or absence of the defect may be detected. Alternatively, the type of defect may be determined by comparison with a reference defect or the like. The presence / absence of defects and the criteria for determining the type of defects may be changed by machine learning or the like, or new criteria may be created.
- the detection result of surface defects is displayed on the display 25.
- As described above, the plurality of estimation-region images C11 to C17 cut out from the plurality of images A11 to A17 containing the defect candidate are combined into one composite image 51, and defects are detected based on the composite image 51, so the composite image 51 contains the information of a plurality of images. Since defect detection can therefore be performed using a large amount of information for one defect candidate, even small surface defects can be detected stably and with high accuracy while suppressing over-detection and erroneous detection. Further, the composite image is created and defect detection is performed only when the number of images in which the estimated coordinates 40 and the actual temporary defect candidate 30 in the image correspond is equal to or greater than a preset threshold value, that is, only when the probability that a defect exists is high; the processing load is therefore small, the detection efficiency is improved, and the detection accuracy is also improved.
- Modification example 1 for creating a composite image: sufficient accuracy may not be obtained simply by superimposing and synthesizing the plurality of estimation-region images C11 to C17 at the center coordinates of each image.
- In such a case, it is desirable to correct the center coordinates of each of the estimation-region images C11 to C17 before superimposing them.
- One example of correcting the center coordinates is based on the relative position within the light-dark pattern in each image. Specifically, when a defect lies at the center of a bright band or a dark band of the light-dark pattern, its shape tends to be symmetrical; however, as shown in FIG. 7, in the estimated image C14 for example, when the bright band portion 120 is close to the boundary with the dark band portion 110, the boundary side of the defect candidate 30 becomes dark, and conversely, when the dark band portion 110 is close to the boundary, the boundary side becomes bright. Therefore, when a center-of-gravity calculation is performed, for example, the result is biased away from the center position 30a of the defect candidate 30. Since this bias correlates with the position from the boundary, the center coordinates of the image are corrected according to the distance L from the boundary.
- FIG. 6D is a diagram showing a state in which the estimated region images C11 to C17 whose center position is corrected are superimposed and combined at the center position to create a composite image 52.
- a sharper composite image 52 is obtained, and the signal height in the signal graph 62 is also higher. Therefore, the composite image 52 with high accuracy can be created, and the surface defect can be detected with high accuracy.
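- A hedged sketch of such a boundary-distance-based correction, assuming a simple clipped linear relation between the signed distance L from the stripe boundary and the centroid bias; the actual relation, gain, and limit are not specified in the embodiment and are purely illustrative.
```python
def correct_center(center_xy, boundary_distance, gain=0.3, max_shift=3.0):
    """Correct a defect candidate's center coordinate for bias near a light/dark boundary.

    boundary_distance is the signed distance L of the center from the nearest
    stripe boundary, positive when the boundary lies on the +x side.
    """
    x, y = center_xy
    shift = max(-max_shift, min(max_shift, gain * boundary_distance))
    # apply a small corrective shift along the stripe-normal (x) direction
    return (x - shift, y)
```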
- Modification example 2 for creating a composite image: another method of creating a composite image when sufficient accuracy cannot be obtained simply by superimposing and synthesizing the plurality of estimation-region images C11 to C17 at their center coordinates will be described with reference to FIG. 8.
- the process up to the creation of the estimated region images C11 to C17 in FIG. 6 (C) is the same.
- For aligning the estimation-region images C11 to C17, a plurality of combinations in which the center coordinates of each image are shifted by various amounts in at least one of the left-right direction (x direction) and the up-down direction (y direction) are tried, and the combination with the maximum evaluation value is adopted.
- In FIG. 8, four types of superposition, (A) to (D), are tried. The resulting composite images are shown as 53 to 56, respectively, and the signal graphs based on those composite images are shown as 63 to 66. In this example, (B), which gives the highest signal, is adopted.
- the evaluation value of the alignment of the plurality of estimated region images C11 to C17 at the time of creating the composite image is obtained from a plurality of combinations in which the center coordinates of each image are shifted to at least one of the X coordinate and the Y coordinate directions. Since it is performed so as to be maximized, it is possible to create a composite image with higher accuracy, and it is possible to detect surface defects with higher accuracy.
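- The embodiment does not state which evaluation value is used; the sketch below is a greedy, correlation-based approximation of the combination search, aligning each estimation-region image to the first crop over a small grid of x/y shifts and then forming a standard-deviation composite. The greedy search and the correlation score are assumptions made to keep the example short.
```python
import numpy as np

def best_shift(base, img, max_shift=2):
    """Try all (dx, dy) in a small grid and keep the shift maximizing correlation with the base crop."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            score = float((base * shifted).sum())   # simple correlation score (assumed evaluation value)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best

def align_and_composite(region_images):
    """Align the crops to the first one, then build a standard-deviation composite."""
    base = region_images[0].astype(np.float32)
    aligned = [base]
    for img in region_images[1:]:
        dx, dy = best_shift(base, img.astype(np.float32))
        aligned.append(np.roll(img.astype(np.float32), (dy, dx), axis=(0, 1)))
    return np.std(np.stack(aligned), axis=0)
```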
- Example of the temporary defect candidate extraction process: an example of extraction, by the temporary defect candidate extraction unit 222, of temporary defect candidates that are large in size and have a gentle change in curvature will be described.
- the illumination light is reflected by the surface of the work 1 and is incident on each pixel of the camera 8.
- The light incident on each pixel is the light from the illuminated region reached by the line of sight of that pixel after reflection at the surface of the work 1, within the range viewed by the pixel.
- If that region is not illuminated, a dark pixel signal is obtained, and if it is illuminated, a bright pixel signal is obtained. If there are no defects and the work 1 is flat, the illumination region corresponding to each pixel is close to a point.
- When there is a defect, two types of surface change of the work 1 occur: (1) a change in curvature and (2) an inclination of the surface. (1) As shown in FIG.
- The difference in surface inclination between a defect-free portion and a defect portion can be detected in a phase image. In a defect-free portion, the phase in the direction parallel to the stripes is constant, and a constant phase change occurs in the direction perpendicular to the stripes according to the stripe period. In a defect portion, the regularity of the phase is disturbed in the phase image. For example, by examining the phase images in the X and Y directions, a temporary defect candidate with a gentle curved-surface change can be detected.
- In this way, temporary defect candidates can be extracted with two types of routines, one for small temporary defect candidates and one for large temporary defect candidates.
- A candidate extracted by either routine may be used as a temporary defect candidate.
- Thread lumps are defects in which thread-like foreign matter is trapped under the paint; they are not circular but elongated. Some are narrow in the line-width direction (for example, less than 0.2 mm) but long in the longitudinal direction (for example, 5 mm or more). Because they are very narrow in the width direction they behave like small defects, while in the longitudinal direction the change in curvature is gradual, so they may be overlooked by the detection methods for small defects and for large (gently sloped) defects used for extracting temporary defect candidates. Therefore, after predetermined processing, the image is binarized and granulated, and whether each part is a defect is judged based on its area.
- Since thread lumps are narrow but long, a sufficient area is obtained if they are detected properly.
- However, thread lumps are easily detected when their longitudinal direction is parallel to the direction in which the light-dark pattern extends, and are difficult to detect when it is perpendicular to it; in the latter case the detected defect is shorter in the longitudinal direction than it actually is, that is, the granulated area tends to be smaller. The defect size is therefore corrected for such temporary defect candidates.
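- A hedged sketch of this granulation-and-area test for thread lumps: the area threshold, the elongation test, and the simple orientation-based area correction below are illustrative assumptions; the embodiment states only that the binarized image is granulated, that parts are judged by area, and that the size is corrected when the longitudinal direction is unfavourable relative to the stripes.
```python
import numpy as np
from scipy import ndimage

def detect_thread_lumps(binary_image, min_area=15, stripe_parallel=True, area_gain=2.0):
    """Granulate a binarized image and judge thread-lump candidates by area."""
    labels, n = ndimage.label(binary_image)
    lumps = []
    for region in range(1, n + 1):
        ys, xs = np.nonzero(labels == region)
        area = xs.size
        height, width = np.ptp(ys) + 1, np.ptp(xs) + 1
        elongation = max(height, width) / min(height, width)
        if not stripe_parallel:
            area = area * area_gain                 # compensate the under-estimated granulated area
        if area >= min_area and elongation >= 3:    # long, narrow region -> thread-lump candidate
            lumps.append({"area": float(area), "elongation": float(elongation)})
    return lumps
```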
- FIG. 10 is a flowchart showing the contents of the surface defect detection process executed by the defect detection PC 22.
- This surface defect detection process shows in more detail the contents of the pre-stage processing of step S06 and the post-stage processing of step S08 in FIG. 5. This surface defect detection process is executed by the processor of the defect detection PC 22 operating according to an operation program stored in an internal storage device such as a hard disk device.
- In step S11, the defect detection PC 22 acquires from the master PC 21 the individual information acquired in step S02 of FIG. 5 and the initial information such as the parameters set in step S03 and the inspection range on the vehicle body.
- In step S12, the images captured by the cameras 8 are acquired, and in step S13, preprocessing is performed, for example setting position information for the images based on the initial setting information.
- In step S14, temporary defect candidates 30 are extracted from each image; in step S15, the movement amount of the work 1 is calculated for one temporary defect candidate 30; and in step S16, the coordinates of the temporary defect candidate 30 in the subsequent images are estimated to obtain the estimated coordinates 40.
- In step S17, matching is performed; that is, it is determined whether the estimated coordinates 40 fall within the predetermined temporary defect candidate region of the actual temporary defect candidate 30 in each image, and if the number of matched images is equal to or larger than a preset threshold value, the temporary defect candidate 30 of each image is determined as a defect candidate in step S18.
- In step S19, for all the images containing the defect candidate, a predetermined area around its representative coordinates is cut out as an estimation region, an estimation-region image group composed of a plurality of estimation-region images C11 to C17 is created, and the process then proceeds to step S20. Steps S12 to S19 constitute the pre-stage processing.
- In step S20, it is determined whether the vehicle body, which is the work 1, has exited the inspection range based on the information from the master PC 21. If it has not exited the inspection range (NO in step S20), the process returns to step S12 and image acquisition from the cameras 8 continues. If the vehicle body has exited the inspection range (YES in step S20), the alignment amounts of the estimation-region images C11 to C17 are set in step S21. Then, in step S22, the estimation-region images C11 to C17 are combined to create a composite image, and the defect detection process is performed in step S23. Steps S21 to S23 constitute the post-stage processing. After defect detection, the detection result is output to the display 25 or the like in step S24.
- The matching process in step S17 will be described in detail with reference to the flowchart of FIG. 11.
- In step S201, K, a variable counting the number of images that match the temporary defect candidate 30, is set to zero, and in step S202, N, a variable counting the number of images examined for a match with the temporary defect candidate 30, is set to zero.
- In step S203, the temporary defect candidate 30 is extracted, and then N is incremented by 1 in step S204.
- In step S205, it is determined whether the temporary defect candidate 30 and the estimated coordinates 40 match. If they match (YES in step S205), K is incremented by 1 in step S206, and the process proceeds to step S207. If they do not match (NO in step S205), the process proceeds directly to step S207.
- In step S207, it is checked whether N has reached the predetermined number of images (7 in this example); if not (NO in step S207), the process returns to step S203 and the temporary defect candidate 30 is extracted from the next image.
- If N has reached the predetermined number of images (YES in step S207), it is determined in step S208 whether K is equal to or greater than a preset threshold value (here, 5 images). If it is not (NO in step S208), the process returns to step S201; in this case, the subsequent cropping and image composition of the estimation-region images are not performed, N and K are reset, and the next temporary defect candidate 30 is extracted.
- If K is equal to or greater than the threshold value (YES in step S208), the temporary defect candidate 30 is determined as a defect candidate in step S209 and the information is saved; then, in step S210, the estimation-region images are cut out from the K matched images. In step S211, the K cut-out estimation-region images are synthesized, and in step S212 it is determined whether a surface defect is detected. If a surface defect is detected (YES in step S212), the surface defect is determined in step S213 and the information is saved, after which the process proceeds to step S214. If no surface defect is detected (NO in step S212), the process proceeds directly to step S214.
- In step S214, it is checked whether the detection processing has been performed on all the inspection target parts of the work; if not (NO in step S214), the process returns to step S201, N and K are reset, and the next temporary defect candidate 30 is extracted. If the detection processing has been performed for all the inspection target parts (YES in step S214), the process ends.
- In this way, compared with the case where the estimation-region images are cut out, image composition is performed, and defect detection is carried out regardless of the number of matched images, the processing load is small, the detection efficiency is improved, and the detection accuracy is also improved.
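- A minimal sketch of this counter-based matching loop (steps S201 to S208), using the 7-image count and 5-image threshold of the example above and assuming hypothetical helpers extract_candidate_region() and is_match() such as those sketched earlier:
```python
def run_matching(images, estimated_coords, extract_candidate_region, is_match,
                 n_images=7, k_threshold=5):
    """Return True when the temporary defect candidate is promoted to a defect candidate."""
    k = 0                                                    # S201: matched-image counter
    n = 0                                                    # S202: examined-image counter
    for image, est_xy in zip(images, estimated_coords):
        region = extract_candidate_region(image)             # S203: extract the candidate region
        n += 1                                               # S204
        if region is not None and is_match(est_xy, region):  # S205: do the coordinates correspond?
            k += 1                                           # S206
        if n >= n_images:                                    # S207: predetermined number of images
            break
    return k >= k_threshold                                  # S208: threshold on matched images
```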
- FIG. 12 is a flowchart for explaining a modified example of the matching process in step S17 of FIG. 10.
- In this modified example, if the number of matched images K has not reached a certain value by the time the number of images N reaches a predetermined number, it is determined that the temporary defect candidate 30 is not likely to be a defect candidate, and the subsequent processing is stopped at that point.
- In step S221, K, a variable counting the number of images that match the temporary defect candidate 30, is set to zero, and in step S222, N, a variable counting the number of images examined for a match with the temporary defect candidate 30, is set to zero.
- In step S223, the temporary defect candidate 30 is extracted, and then N is incremented by 1 in step S224.
- In step S225, it is determined whether the temporary defect candidate 30 and the estimated coordinates 40 match. If they match (YES in step S225), K is incremented by 1 in step S226, and the process proceeds to step S227. If they do not match (NO in step S225), the process proceeds directly to step S227.
- In step S227, it is checked whether N has reached the second predetermined number of images (8 in this example). If it has (YES in step S227), it is checked in step S228 whether K has reached the preset second threshold value (4 images in this example); if it has not (NO in step S228), the process returns to step S221. In this case, the subsequent cropping and image composition of the estimation-region images are not performed, N and K are reset, and the next temporary defect candidate 30 is extracted.
- If K has reached the second threshold value in step S228 (YES in step S228), the process proceeds to step S229. If N has not yet reached the second predetermined number of images (8 images) in step S227 (NO in step S227), the process also proceeds to step S229.
- In step S229, it is checked whether N has reached the first predetermined number of images (9 in this example); if not (NO in step S229), the process returns to step S223 and the temporary defect candidate 30 is extracted from the next image.
- If N has reached the first predetermined number of images (YES in step S229), it is determined in step S230 whether K is equal to or greater than the preset first threshold value (here, 5 images). If it is not (NO in step S230), the process returns to step S221; in this case, the subsequent cropping and image composition of the estimation-region images are not performed, N and K are reset, and the next temporary defect candidate 30 is extracted.
- If K is equal to or greater than the first threshold value (YES in step S230), the temporary defect candidate 30 is determined as a defect candidate in step S231 and the information is saved; then, in step S232, the estimation-region images are cut out from the K matched images. In step S233, the K cut-out estimation-region images are synthesized, and in step S234 it is determined whether a surface defect is detected. If a surface defect is detected (YES in step S234), the surface defect is determined in step S235 and the information is saved, after which the process proceeds to step S236. If no surface defect is detected (NO in step S234), the process proceeds directly to step S236.
- In step S236, it is checked whether the detection processing has been performed on all the inspection target parts of the work; if not (NO in step S236), the process returns to step S221, N and K are reset, and the next temporary defect candidate 30 is extracted. If the detection processing has been performed for all the inspection target parts (YES in step S236), the process ends.
- In this way, at the stage where the number N of images from which the temporary defect candidate 30 has been extracted reaches the second predetermined number, which is smaller than the first predetermined number, that is, midway through the process, if the number K of images in which the temporary defect candidate 30 and the estimated coordinates 40 correspond (match) has not reached the second threshold value, which is smaller than the first threshold value, it is determined that there are too few matching images and the temporary defect candidate 30 is not likely to be a defect candidate, and the subsequent processing is stopped without continuing the matching process to the final image. Since unnecessary processing is not continued, the processing load can be further reduced and the detection accuracy can be further improved.
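- A hedged sketch of this early-stopping variant (steps S221 to S230), with the 8/4 midway check and the 9/5 final check used in the example; the helper functions are assumptions, as before.
```python
def run_matching_with_early_stop(images, estimated_coords, extract_candidate_region, is_match,
                                 n_first=9, k_first=5, n_second=8, k_second=4):
    """Matching loop with a midway abort when too few images have matched."""
    k = n = 0
    for image, est_xy in zip(images, estimated_coords):
        region = extract_candidate_region(image)
        n += 1
        if region is not None and is_match(est_xy, region):
            k += 1
        if n == n_second and k < k_second:   # S227/S228: abort early, unlikely to be a defect candidate
            return False
        if n >= n_first:                     # S229: first predetermined number of images reached
            break
    return k >= k_first                      # S230: first threshold
```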
- FIG. 13 is a flowchart showing details of steps S12 to S18 of the flowchart of FIG. 10, which constitute the pre-stage processing of the surface defect detection process; the same step numbers are assigned to the same processes as in the flowchart of FIG. 10.
- While the work 1 is moved, the camera 8 continuously captures images, and in step S12 the defect detection PC 22 acquires the images from the first captured image to the final one.
- Here, the images in which one temporary defect candidate 30 is captured are assumed to be the images from the n-th imaging to the (n+m−1)-th imaging.
- In step S13, each image is preprocessed, and in step S14, temporary defect candidates 30 are extracted from each of the images of the n-th to (n+m−1)-th imaging, and the representative coordinates and the temporary defect candidate region are obtained for each extracted temporary defect candidate 30.
- In step S16, based on the movement amount of the work 1 calculated in step S15, the coordinates to which the representative coordinates of the temporary defect candidate move in each of the subsequent images are calculated to obtain the estimated coordinates 40 in each image.
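As one way to picture the coordinate estimation of steps S15 and S16, the sketch below shifts the representative coordinates by a per-image displacement derived from the work movement. The constant pixel shift per frame and the function name are assumptions for illustration; the real displacement would be computed from the measured movement amount of the work 1.

```python
import numpy as np

def estimate_coordinates(rep_xy, n_following, px_shift_per_frame):
    """Return the estimated coordinates 40 of a temporary defect candidate in
    each of the following images, assuming a roughly constant pixel shift per
    frame along the image x-axis (illustrative assumption)."""
    rep = np.asarray(rep_xy, dtype=float)
    steps = np.arange(1, n_following + 1)
    return rep + np.outer(steps, np.array([px_shift_per_frame, 0.0]))

# Representative coordinates (x, y) in image A11, estimated for A12..A17.
print(estimate_coordinates((420.0, 310.0), 6, px_shift_per_frame=-18.0))
```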
- step S17 matching is performed for each subsequent image, and if the number of matched images is equal to or greater than a threshold value (for example, m), the provisional defect candidate 30 is determined as a defect candidate in step S18.
- step S19 an estimated region is calculated for each image, and an estimated region image group including a plurality of estimated region images C11 to C17 is created.
- In the first surface defect detection process described above, the defect detection PC 22 extracts temporary defect candidates 30 from the images continuously acquired in time series from the camera 8.
- The extraction method of the temporary defect candidate 30 is not limited; however, a configuration in which the temporary defect candidate 30 is extracted by the following processing is desirable in that the defect portion is emphasized and the temporary defect candidate 30 can be extracted with higher accuracy.
- That is, feature points are extracted from each of the images A11 to A17 (shown in FIG. 6) acquired from the camera 8, either by applying a threshold value after binarization processing or by applying a corner detection function. Then, the temporary defect candidate 30 may be extracted by obtaining a multidimensional feature amount for each of the extracted feature points.
- For example, each image acquired from the camera 8 is binarized and its contour is extracted, and then images that have been dilated and eroded a predetermined number of times are subtracted from each other to obtain the bright bands and dark bands.
- Alternatively, after the feature points of the image are extracted, the extraction of the temporary defect candidate 30 may be performed by obtaining, for each extracted feature point, a multidimensional feature amount based on luminance gradient information in all directions (vertical, horizontal, and diagonal) for all pixels within a specific surrounding range.
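A rough sketch of this kind of feature-point extraction and local-gradient feature amount is shown below. The use of OpenCV's goodFeaturesToTrack and a 16-bin orientation histogram are illustrative choices under the description above, not the method prescribed by the flowcharts.

```python
import cv2
import numpy as np

def extract_candidate_features(gray, max_corners=200, patch=7):
    """gray: 8-bit single-channel image. Detect feature points, then build a
    multidimensional feature amount from luminance gradients around each point
    (illustrative only; a threshold after binarization is an alternative)."""
    pts = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                  minDistance=5)
    if pts is None:
        return []
    gy, gx = np.gradient(gray.astype(float))   # vertical / horizontal gradients
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    h, feats = patch // 2, []
    for (x, y) in pts.reshape(-1, 2).astype(int):
        if y - h < 0 or x - h < 0 or y + h >= gray.shape[0] or x + h >= gray.shape[1]:
            continue
        m = mag[y - h:y + h + 1, x - h:x + h + 1].ravel()
        a = ang[y - h:y + h + 1, x - h:x + h + 1].ravel()
        # 16-bin orientation histogram weighted by gradient magnitude:
        hist, _ = np.histogram(a, bins=16, range=(-np.pi, np.pi), weights=m)
        feats.append(((x, y), hist / (hist.sum() + 1e-9)))
    return feats
```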
- Thereafter, an estimated region image group consisting of the plurality of estimated region images C11 to C17 is created in the same manner as in the first surface defect detection process described above, and defect detection is then performed for each temporary defect candidate using the estimated region image group.
- In this way, feature points are extracted from the plurality of images, acquired from the camera 8, in which the position of the measured portion of the work 1 is continuously displaced, and the temporary defect candidate 30 is extracted by obtaining a multidimensional feature amount for each extracted feature point; therefore, the temporary defect candidate 30 can be extracted with high accuracy, and surface defects can be detected with high accuracy.
- Further, the coordinates of the extracted temporary defect candidate 30 are obtained, and for each of the plurality of images following the image from which it was extracted, the estimated coordinates 40 to which the coordinates of the temporary defect candidate 30 move are calculated, and it is determined whether the estimated coordinates 40 correspond to the temporary defect candidate 30 in that image. If the number of corresponding images is equal to or greater than a preset threshold value, the provisional defect candidate 30 is determined as a defect candidate.
- Then, a predetermined region around the defect candidate is cut out as an estimated region from the plurality of images containing the defect candidate, an estimated region image group composed of the plurality of estimated region images C11 to C17 is created, and defect discrimination is performed based on the created estimated region image group.
- FIG. 14 is a flowchart showing a second surface defect detection process executed by the defect detection PC. Since steps S11 to S13 and steps S15 to S20 are the same as steps S11 to S13 and steps S15 to S20 in FIG. 10, the same step numbers are assigned and the description thereof will be omitted.
- In step S141, an orange-peel (yuzu skin) mask is created, and in step S142 the created mask is applied to extract feature points.
- step S143 a multidimensional feature amount is calculated for each extracted feature point, and after extracting the temporary defect candidate 30 in step S144, the process proceeds to step S16.
- When the vehicle body, which is the work 1, exits the inspection range in step S20 (YES in step S20), the defect discrimination process is executed in step S23 using the created estimated region image group, and the discrimination result is displayed in step S24.
- FIG. 15 is a flowchart showing the details of steps S12 to S18 of the flowchart of FIG. 14, and the same step numbers are assigned to the same processes as those of the flowchart of FIG. Since steps S12, S13, and S15 to S19 are the same as the processes of steps S12, S13, and S15 to S19 in FIG. 13, the description thereof will be omitted.
- After the preprocessing in step S13, an orange-peel (yuzu skin) mask is created for each image in step S141.
- In step S142, the created mask is applied to each image to extract the feature points of each image.
- In step S143, a multidimensional feature amount is calculated for each feature point of each extracted image, and in step S144, provisional defect candidates are extracted for each image, after which the process proceeds to step S16.
- In the third surface defect detection process, a plurality of time-series images continuously acquired from the camera 8 are each divided into a plurality of regions, the corresponding regions of a plurality of preceding and succeeding images are combined, and defect detection is then performed.
- However, the imaging range of the work 1 shown in a region of the preceding image and the imaging range of the work 1 shown in the corresponding region of the succeeding image are not the same, since the imaging position differs according to the movement amount of the work 1.
- Therefore, the regions of the succeeding image are shifted relative to the regions of the preceding image by a position shift amount corresponding to the movement amount of the work 1, and composition is then performed. Further, since the positional deviation between a region of the preceding image and the corresponding region of the succeeding image differs depending on the position of the divided region, the position shift amount corresponding to the movement amount of the work 1 is set for each divided region.
- The plurality of images continuously captured by the camera 8 and continuously acquired in time series by the defect detection PC 22 are the same as the images acquired in the first surface defect detection process.
- FIG. 16 shows a plurality of images A21 and A22 continuously acquired in chronological order. In this example, two images are shown, but the number of images is actually larger. In the images A21 and A22, the light and dark patterns appearing in the images are omitted. These images A21 and A22 are divided into a plurality of regions 1 to p in a direction orthogonal to the moving direction of the work (vertical direction in FIG. 16). Regions 1 to p have the same size at the same position (same coordinates) in the images A21 and A22.
- For example, the imaging range corresponding to each of the regions 1 to p in the image A21 acquired from the camera 8 appears in the subsequent image A22 at a position shifted in the moving direction from the original regions 1 to p by the movement amount of the work 1, as indicated by the arrows.
- Therefore, by shifting the positions of the regions 1 to p in the image A22 by the position shift amount S corresponding to the movement amount of the work, each of the regions 1 to p in the image A21 and each of the position-shifted regions 1 to p in the image A22 cover the same imaging range on the work 1.
- In the same way, by sequentially shifting the regions 1 to p of each succeeding image by the position shift amount S, the imaging ranges of the regions 1 to p can be matched between the original image A21 and each succeeding image.
- However, the amount of deviation from the original regions 1 to p differs for each of the regions 1 to p.
- For example, the amount of positional deviation of a region corresponding to a straight portion of the work in the image and that of a region corresponding to a curved portion are not the same, and it also differs depending on how near or far the portion is from the camera 8. Therefore, even if all the regions 1 to p are shifted by a uniform position shift amount, the same imaging range is not always obtained for every region.
- the position shift amount S is calculated and set for each of the areas 1 to p. Specifically, the average magnification information in the area is obtained for each area 1 to p from the camera information, the camera position information, the three-dimensional shape of the work, and the position information of the work. Then, the position shift amount S is calculated for each of the areas 1 to p from the obtained magnification information and the approximate movement speed assumed in advance, and is used as the position shift amount S for each of the areas 1 to p.
- the amount of movement on the image is related to the imaging magnification of the camera and the speed of the work.
- the imaging magnification of the camera depends on (1) the focal length of the lens and (2) the distance from the camera to each part of the workpiece to be imaged. Regarding (2), the amount of movement in the part near the camera on the image is larger than that in the part far from the camera. If the 3D shape of the work 1 and the installation position of the camera 8 and the position / orientation of the work 1 are known, it is possible to calculate where the point of interest in the image captured at a certain moment appears.
- When the work 1 moves and its position changes, it is possible to calculate how many pixels the same point of interest moves on the image between two consecutive images. For example, consider a case where a lens with a focal length of 35 mm and a sensor with a pixel size of 5.5 μm are used and the work moves by 1.7 mm between adjacent images: as shown in the graph of FIG. 17, when the distance Zw to the work 1 is 600 to 1100 mm, the movement distance on the image is about 18 to 10 pixels.
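The quoted range follows from a simple pinhole approximation (magnification ≈ focal length / object distance). The short calculation below only reproduces the 18-to-10-pixel figures as a sanity check under that approximation.

```python
# Pinhole approximation: shift on the sensor = work shift * (f / Zw),
# converted to pixels via the pixel pitch.
f_mm = 35.0          # focal length
pitch_mm = 0.0055    # 5.5 um pixel size
work_shift_mm = 1.7  # work movement between adjacent images

for zw_mm in (600.0, 1100.0):
    shift_px = work_shift_mm * (f_mm / zw_mm) / pitch_mm
    print(f"Zw = {zw_mm:.0f} mm -> about {shift_px:.1f} px")
# Zw = 600 mm  -> about 18.0 px
# Zw = 1100 mm -> about 9.8 px
```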
- In consideration of this, the regions are divided on the image so that, for example, the difference in distance from the camera within each region is within ±5 cm. The average amount of positional deviation between consecutive images is then calculated for each divided region from the approximate moving speed of the work 1. For each of the regions 1 to p, three types of position shift amounts can be set: the calculated amount and the amounts deviating from it by ±1 pixel. However, the number of position shift amounts is not limited to three, and the distance difference is not limited to ±5 cm.
- The position shift amount S set for each of the regions 1 to p is stored in a table in the storage unit of the defect detection PC 22 in association with the regions 1 to p. For an imaging site for which the same position shift amount can be used, for example a portion of the work 1 having the same shape or a work of the same type, the position shift amount is set by reading it from the table.
- Next, a predetermined number of consecutive images are combined for each of the plurality of regions 1 to p.
- Specifically, the images of each region are superimposed with the positions of the regions 1 to p shifted by the set position shift amount S, and the composite image is calculated for each pixel at the corresponding coordinates.
- As the composite image, at least one of an image synthesized by calculating a statistical variation value, such as a standard deviation image, a phase image, a phase difference image, a maximum value image, a minimum value image, and an average value image can be used.
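The region-wise shift-and-composite operation can be pictured with the numpy sketch below. Splitting into strips stacked orthogonally to the moving direction, shifting each strip by its own amount, and taking the per-pixel standard deviation are assumptions that follow the description, not the exact implementation.

```python
import numpy as np

def shifted_std_composite(images, region_rows, shifts_px):
    """images: list of 2-D arrays from consecutive frames.
    region_rows: list of (row_start, row_end) pairs defining regions 1..p.
    shifts_px: per-region shift S (in pixels) along the moving direction
    (columns), applied cumulatively to the 2nd, 3rd, ... image."""
    stack = np.stack([img.astype(float) for img in images])
    out = np.zeros_like(stack[0])
    for (r0, r1), s in zip(region_rows, shifts_px):
        aligned = [stack[0, r0:r1]]
        for k in range(1, stack.shape[0]):
            # Shift the k-th image's region back by k*s columns.
            # (np.roll wraps at the border; a real implementation would crop.)
            aligned.append(np.roll(stack[k, r0:r1], -int(round(k * s)), axis=1))
        out[r0:r1] = np.std(np.stack(aligned), axis=0)  # per-pixel variation
    return out
```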
- Next, preprocessing such as background removal and binarization is performed on the composite image, for example the standard deviation image, and defect candidates are extracted. Then, if necessary, surface defects are detected using a composite image obtained by an operation different from the one used when extracting the defect candidates.
- the surface defect detection standard may be freely selected, and only the presence or absence of the defect may be determined, or the type of the defect may be determined by comparison with the reference defect or the like.
- the criteria for determining the presence or absence of defects and the types of defects may be set according to the characteristics of the work and the defects, may be changed by machine learning or the like, or new criteria may be created.
- the detection result of surface defects is displayed on the display 25.
- the developed view of the work (vehicle body) is displayed on the display 25, and the position and type of the surface defect are displayed on the developed view so as to be known.
- In this way, the plurality of captured images A21 and A22 continuously acquired in time series from the camera are divided into the plurality of regions 1 to p, the corresponding regions of the plurality of images are combined, and defect detection is performed based on the composite image, so the composite image contains the information of a plurality of images. Therefore, since defect detection can be performed using a large amount of information for one defect candidate, even a small surface defect can be detected accurately and stably while suppressing over-detection and erroneous detection.
- Moreover, since the regions 1 to p of the succeeding image A22 are combined after being shifted in turn relative to the regions 1 to p of the preceding image A21, a region of the preceding image and the corresponding region of the succeeding image cover the same imaging range of the work 1, and the plurality of images can be combined with their imaging ranges of the work 1 matched. Further, since the position shift amount is set for each of the divided regions 1 to p, the error in the imaging range can be minimized compared with the case where a uniform position shift amount is applied to all the divided regions 1 to p. Therefore, surface defects can be detected with even higher accuracy.
- a position shift amount candidate is set under a plurality of conditions from a slow speed to a high speed, including the assumed movement speed. Then, each position shift amount candidate is applied to create a composite image, defect detection is performed as necessary, and the position shift amount S having the highest evaluation from the comparison is adopted.
- In this way, a plurality of position shift amount candidates are set for each of the regions 1 to p under different conditions, and the candidate with the highest evaluation when images are combined with each candidate is selected by comparison, so a suitable position shift amount S can be set for each of the regions 1 to p and surface defects can be detected with higher accuracy.
- [2-2] Modification 2 regarding the position shift amount
- The position shift amount S for each of the regions 1 to p may also be set as follows. That is, as shown in the graph of FIG. 17, if the movement distance of the work 1 between adjacent images is known, the amount of positional deviation on the image can be calculated. In the example above, the position shift amount was set based on a work moving speed assumed in advance.
- Instead, the appropriate position shift amount for each frame when creating the composite image may be determined based on the actually measured work position. In this case, the trouble of selecting the optimum position shift amount from among a plurality of candidates can be saved.
- the method of measuring the work position is as follows.
- The same portion of the work 1, or of a support member that moves in the same way as the work 1, is imaged by a plurality of position-measurement cameras arranged in the moving direction of the work 1, and the position information of the work is obtained from the images.
- If the work 1 has a characteristic hole, that hole, or a mark provided on a table that holds and moves the work 1, is used as the target for measuring the position or speed of the work 1.
- Specifically, a plurality of cameras different from the camera 8 are arranged in a row in the traveling direction of the work 1 so as to view the side surface of the work from the side of the work 1.
- Since the plurality of lateral fields of view are connected, the cameras are arranged so as to cover the entire length of the work 1.
- The magnification can be calculated from the distance from the camera to the work 1 and the focal length of the camera, and based on the magnification, the actual position can be obtained from the position on the image.
- Since the positional relationship of the cameras is known, the position of the work 1 can be obtained from the image information of each camera.
- Based on the obtained work position, an appropriate position shift amount for the images of the defect-extraction camera 8 can be obtained. For example, for each region virtually divided on the work 1 so that the difference in distance from the camera on the work is within ±5 cm, the average movement amount on the image between adjacent images is determined according to the movement amount of the work 1, and a composite image is created using this as the position shift amount at the time of superimposition.
- [2-3] Modification 3 regarding the position shift amount
- In Modification 2, the position of the work was obtained using a plurality of cameras arranged in a row.
- Instead, the same portion of the work 1, or of a support member that moves in the same way as the work 1, may be measured by a measurement system including a distance sensor, a speed sensor, or a vibration sensor, used alone or in combination, to obtain the work position information.
- The measurement target is a portion of the work 1 or the same portion of a support member that moves in the same manner as the work 1.
- For example, "a sensor that detects passage of the work through a reference point + a distance sensor" or "a sensor that detects passage of the reference point + a speed sensor + the imaging time interval of adjacent images" is used.
- In the former case, the work position can be obtained directly.
- In the latter case, the work position at the time each image is captured can be obtained by multiplying the speed information from the speed sensor by the imaging interval.
- Based on the obtained work position, an appropriate position shift amount for the images of the defect-extraction camera 8 can be obtained. For example, for each region virtually divided on the work 1 so that the difference in distance from the camera on the work is within ±5 cm, the average movement amount on the image between adjacent images is determined according to the movement amount of the work 1, and a composite image is created using this as the position shift amount at the time of superimposition.
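For the "reference point + speed sensor + imaging interval" variant, the per-frame work position is just an accumulated speed-times-interval product, as in this small sketch; the function name, units, and sampling are illustrative assumptions.

```python
def work_positions(x_ref_mm, speeds_mm_s, frame_interval_s):
    """Position of the work at each captured frame, starting from the frame
    at which the reference-point sensor fired (illustrative calculation)."""
    positions, x = [], x_ref_mm
    for v in speeds_mm_s:               # speed measured around each frame
        x += v * frame_interval_s       # speed x imaging interval
        positions.append(x)
    return positions

# e.g. roughly 100 mm/s conveyance imaged every 17 ms -> about 1.7 mm per frame
print(work_positions(0.0, [100.0] * 5, 0.017))
```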
- FIG. 18 is a flowchart showing the contents of the third surface defect detection process executed by the defect detection PC 22.
- This surface defect detection process shows in more detail the contents of the pre-stage process of step S06 and the post-stage process of step S08 in FIG. Further, this surface defect detection process is executed by operating the processor in the defect detection PC 22 according to an operation program stored in an internal storage device such as a hard disk device.
- In step S31, the defect detection PC 22 acquires, from the master PC 21, the individual information acquired in step S02 of FIG. 5 and initial information such as the parameters set in step S03 and the setting of the inspection range on the vehicle body.
- step S32 the images A21 and A22 captured by the camera 8 are acquired, and then in step S33, each image A21 and A22 is divided into a plurality of regions 1 to p.
- step S35 a plurality of position shift amount candidates are set for each of the divided regions 1 to p based on the position of the work 1 and the moving speed (step S34).
- In step S36, for each region, a plurality of images whose positions are shifted by the plurality of position shift amount candidates are combined to create a plurality of composite image candidates.
- Then, in step S37, among the created composite image candidates, the position shift amount candidate with the highest evaluation is set as the position shift amount for each of the regions 1 to p, and a plurality of images are recombined for each region according to that position shift amount to create the composite image.
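One way to realize the candidate selection of steps S35 to S37 is sketched below. The evaluation value used here (the mean of the standard-deviation composite) is an assumed stand-in for whatever evaluation the actual system applies.

```python
import numpy as np

def pick_best_shift(region_stack, shift_candidates):
    """region_stack: 3-D array (n_images, rows, cols) for one region.
    shift_candidates: e.g. [s - 1, s, s + 1] in pixels.
    Returns (best_shift, best_composite) using an assumed evaluation value."""
    best = None
    for s in shift_candidates:
        aligned = [np.roll(region_stack[k], -k * s, axis=1)
                   for k in range(region_stack.shape[0])]
        composite = np.std(np.stack(aligned).astype(float), axis=0)
        score = composite.mean()        # assumed evaluation value
        if best is None or score > best[2]:
            best = (s, composite, score)
    return best[0], best[1]
```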
- In step S38, preprocessing such as background removal and binarization is performed on the composite image, and defect candidates are then extracted in step S39.
- In step S40, a defect candidate image group in which the defect candidates have been extracted is created, and the process then proceeds to step S41. Steps S32 to S40 constitute the pre-stage processing.
- step S41 it is determined whether or not the vehicle body has exited the inspection range based on the information from the master PC 21. If the inspection range is not exited (NO in step S41), the process returns to step S32 and the acquisition of the image from the camera 8 is continued. If the vehicle body has left the inspection range (YES in step S41), the defect detection process is performed on the defect candidate image group in step S42. Step S42 is a post-stage process. After the defect is detected, the detection result is output to the display 25 or the like in step S43.
- FIG. 19 is a flowchart showing details of steps S32 to S40 of the flowchart of FIG. 18, which constitute the pre-stage processing of this surface defect detection process; the same step numbers are assigned to the same processes as in the flowchart of FIG. 18.
- While the work 1 is moved, the camera 8 continuously captures images, and in step S32 the defect detection PC 22 acquires the images from the first captured image to the final one.
- Here, the case where the images from the n-th captured image to the (n+m−1)-th captured image are used will be illustrated.
- each image is divided into, for example, p image regions of regions 1 to p.
- In step S35, q position shift amount candidates are set for each of the p regions, and in step S36, the q position shift amount candidates are applied to each of the p image regions to create q composite image candidates. That is, q composite images are created for each of the regions 1 to p.
- In step S37-1, the composite image having the highest evaluation value is selected for each of the regions 1 to p, and the position shift amount candidate corresponding to the selected composite image is determined as the position shift amount for that image region.
- In step S37-2, a composite image is created by applying the determined position shift amount to each of the regions 1 to p.
- The subsequent preprocessing (step S38), defect candidate extraction processing (step S39), and defect candidate image group creation processing (step S40) are the same as those in FIG. 18, and their description is omitted.
- [4] Creation of a standard deviation image, etc.
- In the first surface defect detection process and the third surface defect detection process, a plurality of images to be combined are created based on a plurality of images with overlapping imaging ranges captured in time series by the camera 8 while the work is moved under the light-dark illumination pattern, and these images are combined into a single composite image.
- As one such composite image, an image synthesized by calculating a statistical variation value, such as a standard deviation image, can be considered.
- Statistical variation values include at least one of the variance, the standard deviation, and the full width at half maximum. Any of these may be calculated, but here the case where the standard deviation is calculated for composition will be described.
- FIG. 20 is a flowchart showing the process of creating a standard deviation image. The processing shown in FIG. 20 and the subsequent flowcharts is executed by the CPU of the defect detection PC operating according to the operation program stored in the storage unit or the like.
- step S51 the original images (N images) to be combined are generated, and in step S52, the sum of squares of the brightness values (hereinafter, also referred to as pixel values) of the first original image is calculated for each pixel.
- In step S53, the sum of the pixel values is calculated for each pixel. For the first image, both the sum of squares and the sum are the results of that image alone.
- In step S54, it is checked whether there is a next image; if there is (YES in step S54), the process returns to step S52, where the pixel value of each pixel of the second image is squared and added to the corresponding squared pixel value of the first image.
- Similarly, in step S53, each pixel value of the second image is added to the corresponding summed pixel value of the first image.
- Such processing is performed for N images in order, and the sum of squares of each pixel value and the sum of each pixel value are calculated for each corresponding pixel of the N images.
- When the above processing has been completed for the N images (NO in step S54), the average of the sums of the pixel values calculated in step S53 is calculated in step S55, and the square of that average is then calculated in step S56.
- In step S57, the mean square, which is the average of the sums of squares of the pixel values calculated in step S52, is calculated, and the variance is then obtained from {(mean square) − (square of the mean)}. Then, in step S59, the standard deviation, which is the square root of the variance, is obtained.
- the standard deviation thus obtained is preferably normalized, and a composite image is created based on the result.
- When the variance or the full width at half maximum is used as the statistical variation value, the calculation may be performed in the same manner.
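The running-sum computation of steps S51 to S59 corresponds to the identity variance = (mean of squares) − (square of mean). A compact numpy sketch, assuming the frames are already aligned, is:

```python
import numpy as np

def std_image(images):
    """Standard deviation image from N aligned frames using running sums,
    mirroring the sum / sum-of-squares bookkeeping in the flowchart."""
    n = len(images)
    sum_px = np.zeros_like(images[0], dtype=float)
    sum_sq = np.zeros_like(images[0], dtype=float)
    for img in images:                  # one pass over the N images
        f = img.astype(float)
        sum_px += f                     # step S53: per-pixel sum
        sum_sq += f * f                 # step S52: per-pixel sum of squares
    mean = sum_px / n                   # step S55
    variance = sum_sq / n - mean ** 2   # steps S56-S57: mean square - square of mean
    return np.sqrt(np.maximum(variance, 0.0))  # standard deviation (step S59)
```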
- Surface defect detection processing is performed based on the created composite image.
- the detection process may be performed in the same manner as the first surface defect detection process and the third surface defect detection process.
- [4-1] Other embodiment 1 relating to the standard deviation image
- FIG. 21 shows a graph of the illuminance on the work 1 from the illumination unit 6 that provides the light-dark pattern illumination.
- the top 71 of the waveform shows a bright band and the bottom 72 shows a dark band.
- the rising and falling portions 73 of the waveform from the bright band to the dark band or from the dark band to the bright band are actually not vertical but inclined.
- At a pixel corresponding to such an inclined portion 73, the pixel value becomes an intermediate gradation.
- Such an intermediate-gradation pixel value affects the variation and causes a decrease in defect detection accuracy. It is therefore desirable to exclude such intermediate-gradation pixel values from the sampling candidates for the variation calculation and to calculate the variation only for the selected optimum sampling candidates.
- Specifically, when the number of images is even, it is preferable to exclude the two intermediate-gradation pixel values (the two median values) from the pixel values of the plurality of images and then calculate the variation.
- When the number of images is odd, it is preferable to exclude one intermediate-gradation pixel value (the median value) from the pixel values of the plurality of images and then calculate the variation.
- In this way, the statistical variation value is calculated only for the optimum sampling candidates, and the influence of the pixel values excluded from the sampling candidates can be suppressed. Therefore, a composite image that allows highly accurate defect detection can be created even when the number of images to be combined is small.
- FIG. 22 is a flowchart showing the process of creating a standard deviation image by excluding the intermediate-gradation pixel values from the sampling candidates of the variation calculation and performing the variation calculation only for the selected optimum sampling candidates.
- In step S61, a plurality (N) of original images are generated. Then, in step S62, the sampling data, i.e. the N pixel values at each pixel position, are sorted, and one median value (when N is odd) or the two median values (when N is even) are removed.
- In step S63, the standard deviation is calculated for each pixel from the remaining N−1 (N odd) or N−2 (N even) values.
- the standard deviation thus obtained is preferably normalized, and a composite image is created based on the result.
- When the variance or the full width at half maximum is used as the statistical variation value, the calculation may be performed in the same manner.
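The median-removal variant of steps S61 to S63 can be written directly with numpy, as in this sketch; sorting the whole stack per pixel is done here for brevity and is an implementation convenience, not a requirement.

```python
import numpy as np

def trimmed_std_image(images):
    """Standard deviation image after removing the median value (N odd) or the
    two median values (N even) at each pixel, following steps S61 to S63."""
    stack = np.sort(np.stack([i.astype(float) for i in images]), axis=0)
    n = stack.shape[0]
    if n % 2 == 1:                      # drop the single median value
        keep = np.delete(stack, n // 2, axis=0)
    else:                               # drop the two central values
        keep = np.delete(stack, [n // 2 - 1, n // 2], axis=0)
    return keep.std(axis=0)
```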
- [4-2] Other embodiment 2 relating to the standard deviation image
- In this embodiment as well, imaging is performed a plurality of times (N times) for one cycle of the illumination pattern; N may be a small number.
- When the number of images is odd, the standard deviation is calculated for each pixel from N−1 sampling data; when the number of images is even, it is calculated from N−2 sampling data.
- That is, when N is odd, the standard deviation is calculated for each pixel for every combination of N−1 values selected from the N pixel values (NCN−1 combinations), and when N is even, for every combination of N−2 values selected from the N pixel values (NCN−2 combinations). Then, among the standard deviations obtained for each pixel for the NCN−1 or NCN−2 combinations, the maximum is determined as the standard deviation for that pixel (maximum value processing).
- In step S71, the original images (N images) to be combined are generated; in step S72, the sum of squares of the pixel values is calculated for each pixel of the first original image; and in step S73, the sum of the pixel values is calculated for each pixel. For the first image, both the sum of squares and the sum are the results of that image alone.
- In step S74, the squared value of each pixel value of the first image is stored, and in step S75, each original pixel value of the first image is stored.
- In step S76, it is checked whether there is a next image; if there is (YES in step S76), the process returns to step S72, where the pixel value of each pixel of the second image is squared and added to the corresponding squared pixel value of the first image.
- Similarly, in step S73, each pixel value of the second image is added to the corresponding summed pixel value of the first image.
- In step S74, the squared value of each pixel value of the second image is stored, and in step S75, each original pixel value of the second image is stored.
- Such processing is performed for the N images in order, and the sum of squares of the pixel values and the sum of the pixel values are calculated for each corresponding pixel of the N images. In addition, the squared value and the original pixel value of each pixel of each of the N images are stored.
- Next, in step S77, the squared pixel values of the first image are subtracted from the sum of squares of the pixel values of all the images, so that the sum of squares for N−1 images is calculated for each pixel.
- In step S78, each pixel value of the first image is subtracted from the sum of the pixel values of all the images calculated in step S73 to calculate the sum for N−1 images.
- In step S79, the average of the sums for the N−1 images calculated in step S78 is calculated, and the square of that average is then calculated in step S80.
- In step S81, the mean square, which is the average of the sums of squares for the N−1 images calculated in step S77, is calculated, and in step S82 the variance is calculated from {(mean square) − (square of the mean)}. Then, in step S83, the standard deviation, which is the square root of the variance, is obtained.
- In step S84, the maximization processing is performed. Since this is the first standard deviation obtained (the one with the first image excluded), this value is the maximum at this point.
- Next, the sum of squares of the pixel values and the pixel values of the second image are subtracted in the same way, and the standard deviation is calculated in the same manner.
- Then, the maximization processing is performed in step S85. In this maximization processing, the standard deviation obtained when the first image is excluded and the standard deviation obtained when the second image is excluded are compared, and the larger standard deviation is adopted. The same processing is then repeated while excluding each of the remaining images in turn.
- the standard deviation obtained in this way is preferably normalized, and a composite image is created based on the result.
- When the variance or the full width at half maximum is used as the statistical variation value, the calculation may be performed in the same manner.
- a predetermined number of images are excluded from the calculation target in order from a plurality of images, and the statistical variation value of each pixel is calculated, so that the optimum sampling candidate can be easily selected. Moreover, since the maximum value of the calculated variation value is adopted as the variation value for the pixel, a composite image having a higher S / N ratio can be created.
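The leave-one-out maximization can also be written directly from the stored per-image values, as in the sketch below (shown for the case where one image is excluded at a time; excluding two would follow the same pattern). It computes the standard deviation with each image excluded in turn and keeps the per-pixel maximum, which is what the running-sum bookkeeping above achieves incrementally.

```python
import numpy as np

def max_leave_one_out_std(images):
    """Per-pixel maximum of the standard deviations obtained by excluding one
    image at a time; a direct, non-incremental sketch of the maximization."""
    stack = np.stack([i.astype(float) for i in images])
    n = stack.shape[0]
    sum_px = stack.sum(axis=0)
    sum_sq = (stack ** 2).sum(axis=0)
    best = np.zeros_like(sum_px)
    for k in range(n):                  # exclude image k (steps S77 to S83)
        s = sum_px - stack[k]
        sq = sum_sq - stack[k] ** 2
        mean = s / (n - 1)
        var = sq / (n - 1) - mean ** 2
        best = np.maximum(best, np.sqrt(np.maximum(var, 0.0)))  # steps S84-S85
    return best
```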
- the present invention can be used to detect surface defects of workpieces such as vehicle bodies.
Abstract
Description
(1) A workpiece surface defect detection device comprising: image acquisition means for acquiring a plurality of images of a measured portion of a workpiece, which is a surface defect detection target, in a state where a light-dark pattern of an illumination device is moved relative to the workpiece; provisional defect candidate extraction means for extracting provisional defect candidates from each image acquired by the image acquisition means; defect candidate determination means for determining a provisional defect candidate as a defect candidate if, among the plurality of images from which provisional defect candidates have been extracted by the provisional defect candidate extraction means, the number of images containing the provisional defect candidate is equal to or greater than a preset threshold value; image composition means for creating a composite image by combining the plurality of images containing the defect candidate determined by the defect candidate determination means; and detection means for performing defect detection based on the composite image created by the image composition means.
(2) The workpiece surface defect detection device according to item (1), wherein, with the threshold value defined as a first threshold value, the defect candidate determination means stops the process of determining a provisional defect candidate as a defect candidate when, in the middle of that process, the number of images containing the provisional defect candidate has not reached a preset second threshold value that is smaller than the first threshold value.
(3) The workpiece surface defect detection device according to item (1) or (2), wherein the defect candidate determination means determines the provisional defect candidate as a defect candidate if, among the plurality of images, the number of images in which the positions of the provisional defect candidate correspond is equal to or greater than a preset threshold value.
(4) The workpiece surface defect detection device according to item (3), wherein the position of the provisional defect candidate is represented by coordinates of the provisional defect candidate, the device further comprises coordinate estimation means for obtaining the coordinates of the provisional defect candidate extracted by the provisional defect candidate extraction means and for calculating, for each of a plurality of images following the image from which the provisional defect candidate was extracted, estimated coordinates to which the coordinates of the provisional defect candidate move, and the defect candidate determination means determines, for each of the subsequent images, whether the estimated coordinates calculated by the coordinate estimation means correspond to the provisional defect candidate in that image, and determines the provisional defect candidate as a defect candidate if the number of subsequent images in which the estimated coordinates and the provisional defect candidate correspond is equal to or greater than a preset threshold value.
(5) The workpiece surface defect detection device according to any one of items (1) to (4), further comprising image group creation means for cutting out, for each defect candidate determined by the defect candidate determination means, a predetermined region around the defect candidate as an estimated region from the plurality of images containing the defect candidate to create an estimated region image group, wherein the image composition means creates, for each defect candidate, a composite image by combining the plurality of estimated region images created by the image group creation means.
(6) The workpiece surface defect detection device according to any one of items (1) to (5), wherein the composite image is at least one of a standard deviation image, a phase image, a phase difference image, a maximum value image, a minimum value image, and an average value image.
(7) The workpiece surface defect detection device according to item (5), wherein the composition means aligns the plurality of estimated region images when creating the composite image by superimposing the center coordinates of the images.
(8) The workpiece surface defect detection device according to item (7), wherein the composition means corrects the center coordinates of each image based on the position of each center coordinate from a boundary of the light-dark pattern of the image corresponding to the light-dark pattern of the illumination device, and superimposes the corrected center coordinates.
(9) The workpiece surface defect detection device according to any one of items (5) to (8), wherein the composition means aligns the plurality of estimated region images when creating the composite image so that the evaluation value becomes maximum among a plurality of combinations in which the center coordinates of the images are shifted relative to one another in the X-coordinate and Y-coordinate directions.
(10) The workpiece surface defect detection device according to any one of items (1) to (9), wherein the provisional defect candidate extraction means extracts provisional defect candidates using at least two types of routines, a routine for extracting small defects and a processing routine for extracting gentle convex defects, and performs defect size correction for provisional defect candidates detected by the gentle convex defect routine.
(11) The workpiece surface defect detection device according to any one of items (1) to (10), wherein the detection means discriminates thread-like defects based on the amount of deviation from circularity of the two-dimensional shape of the two-dimensional defect signal in the phase image of the defect candidate.
(12) A workpiece surface inspection system comprising: an illumination device that irradiates a workpiece, which is a surface defect detection target, with illumination light of a light-dark pattern; imaging means for capturing reflected light of the illumination light from the measured portion of the workpiece; moving means for moving the light-dark pattern of the illumination device relative to the workpiece; control means for controlling the imaging means so as to image the measured portion of the workpiece while the light-dark pattern of the illumination device is moved relative to the workpiece by the moving means; and the workpiece surface defect detection device according to any one of items (1) to (11), wherein the image acquisition means acquires a plurality of images of the measured portion of the workpiece from the imaging means.
(13) A workpiece surface defect detection method executed by a workpiece surface defect detection device, the method comprising: an image acquisition step of acquiring a plurality of images of a measured portion of a workpiece, which is a surface defect detection target, in a state where a light-dark pattern of an illumination device is moved relative to the workpiece; a provisional defect candidate extraction step of extracting provisional defect candidates from each image acquired in the image acquisition step; a defect candidate determination step of determining a provisional defect candidate as a defect candidate if, among the plurality of images from which provisional defect candidates have been extracted in the provisional defect candidate extraction step, the number of images containing the provisional defect candidate is equal to or greater than a preset threshold value; an image composition step of creating a composite image by combining the plurality of images containing the defect candidate determined in the defect candidate determination step; and a detection step of performing defect detection based on the composite image created in the image composition step.
(14) The workpiece surface defect detection method according to item (13), wherein, with the threshold value defined as a first threshold value, the process of determining a provisional defect candidate as a defect candidate is stopped when, in the middle of that process, the number of images containing the provisional defect candidate has not reached a preset second threshold value that is smaller than the first threshold value.
(15) The workpiece surface defect detection method according to item (13) or (14), wherein, in the defect candidate determination step, the provisional defect candidate is determined as a defect candidate if, among the plurality of images, the number of images in which the positions of the provisional defect candidate correspond is equal to or greater than a preset threshold value.
(16) The workpiece surface defect inspection method according to item (15), wherein the position of the provisional defect candidate is represented by coordinates of the provisional defect candidate, the method further comprises a coordinate estimation step of obtaining the coordinates of the provisional defect candidate extracted in the provisional defect candidate extraction step and of calculating, for each of a plurality of images following the image from which the provisional defect candidate was extracted, estimated coordinates to which the coordinates of the provisional defect candidate move, and in the defect candidate determination step it is determined, for each of the subsequent images, whether the estimated coordinates calculated in the coordinate estimation step correspond to the provisional defect candidate in that image, and the provisional defect candidate is determined as a defect candidate if the number of subsequent images in which the estimated coordinates and the provisional defect candidate correspond is equal to or greater than a preset threshold value.
(17) The workpiece surface defect inspection method according to any one of items (13) to (16), further comprising an image group creation step of cutting out, for each defect candidate determined in the defect candidate determination step, a predetermined region around the defect candidate as an estimated region from the plurality of images containing the defect candidate to create an estimated region image group, wherein in the image composition step a composite image is created for each defect candidate by combining the plurality of estimated region images created in the image group creation step.
(18) The workpiece surface defect detection method according to any one of items (13) to (17), wherein the composite image is at least one of a standard deviation image, a phase image, a phase difference image, a maximum value image, a minimum value image, and an average value image.
(19) The workpiece surface defect detection method according to item (17), wherein, in the composition step, the plurality of estimated region images are aligned when creating the composite image by superimposing the center coordinates of the images.
(20) The workpiece surface defect detection method according to item (19), wherein, in the composition step, the center coordinates of each image are corrected based on the position of each center coordinate from a boundary of the light-dark pattern of the image corresponding to the light-dark pattern of the illumination device, and the corrected center coordinates are superimposed.
(21) The workpiece surface defect detection method according to any one of items (17) to (20), wherein, in the composition step, the plurality of estimated region images are aligned when creating the composite image so that the evaluation value becomes maximum among a plurality of combinations in which the center coordinates of the images are shifted relative to one another in the X-coordinate and Y-coordinate directions.
(22) The workpiece surface defect detection method according to any one of items (13) to (21), wherein, in the provisional defect candidate extraction step, provisional defect candidates are extracted using at least two types of routines, a routine for extracting small defects and a processing routine for extracting gentle convex defects, and defect size correction is performed for provisional defect candidates detected by the gentle convex defect routine.
(23) The workpiece surface defect detection method according to any one of items (13) to (22), wherein, in the detection step, thread-like defects are discriminated based on the amount of deviation from circularity of the two-dimensional shape of the two-dimensional defect signal in the phase image of the defect candidate.
(24) A program for causing a computer to execute the workpiece surface defect detection method according to any one of items (13) to (23).
[1] First surface defect detection process
As described above, the defect detection PC 22 acquires, from each camera 8, a plurality of images in which the position of the measured portion of the work 1 is continuously shifted in the moving direction. This is shown in FIG. 6. Images A11 to A17 in FIG. 6(A) are images continuously acquired in time series from one camera 8. The light-dark pattern appearing in the images, in which vertically extending bright bands (white portions) and dark bands (black portions) alternate in the horizontal direction, corresponds to the light-dark stripe pattern of the illumination light from the illumination unit 6.
Next, as a result of the matching, the number of images in which the estimated coordinates 40 correspond to (match) the actual temporary defect candidate 30 in the image is examined, and it is determined whether this number is equal to or greater than a preset threshold value. If it is, the probability that the temporary defect candidate 30 actually exists is high, so the temporary defect candidate 30 in each image is determined as a defect candidate. In the example of FIGS. 6(A) and 6(B), matching is achieved for all of the images A12 to A17 following the image A11; that is, the estimated coordinates 40 are included in the temporary defect candidate region of the temporary defect candidate 30 in each image. If the number of images in which the estimated coordinates 40 correspond to the actual temporary defect candidate 30 is not equal to or greater than the preset threshold value, the temporary defect candidate 30 is not considered likely to be a defect candidate, so the matching is stopped and the next temporary defect candidate 30 is extracted.
Further, since a composite image is created and defect detection is performed only when the number of images in which the estimated coordinates 40 correspond to the actual temporary defect candidate 30 in the image is equal to or greater than the preset threshold value, defect detection can be performed when a defect is highly likely to exist; the processing load is kept small, the detection efficiency improves, and the detection accuracy also improves.
[1-1] Modification 1 in creating a composite image
Incidentally, simply superimposing and combining the plurality of estimated region images C11 to C17 at the center coordinates of each image may not always provide sufficient accuracy.
[1-2] Modification 2 in creating a composite image
Another method of creating a composite image for the case where sufficient accuracy cannot be obtained simply by superimposing the plurality of estimated region images C11 to C17 at the center coordinates of each image will be described with reference to FIG. 8.
[1-3] Example of the temporary defect candidate extraction process
An example of the process, performed by the temporary defect candidate extraction unit 222, of extracting a temporary defect candidate that is large in size and has a gentle curvature change will be described.
(1) As shown in FIG. 9(A), when the surface of the work 1 has a curvature change due to a temporary defect candidate 30, the direction of the line of sight changes and, in addition, the region viewed by each pixel widens. As a result, the region to which each pixel corresponds becomes not a point but a spread-out region, and the average luminance within that region corresponds to the pixel signal. In other words, when the shape change of the temporary defect candidate 30 is steep, the curvature change within the region viewed by each pixel becomes large, and the spread of the area, in addition to the inclination of the line of sight, can no longer be ignored. The enlargement of the viewed region results in averaging of the illumination distribution of the signal. With light-dark stripe pattern illumination (in FIG. 9, the white portions are bright and the black portions are dark), when the region spreads, an average value over both the bright and dark regions is obtained according to how it spreads. When the light-dark stripe pattern in the portion where this phenomenon occurs moves sequentially, the effect can be captured in the standard deviation image.
(2) As shown in FIG. 9(B), when the surface of the work 1 has a large radius of curvature due to a temporary defect candidate 30 and is tilted while remaining substantially flat, the corresponding region remains a point but faces a direction different from that of an untilted surface. When the temporary defect candidate 30 is large (its shape change is gentle), the region viewed by each pixel stays the same and the change of the line-of-sight direction becomes dominant, so the curvature change is gentle. Such a change cannot be captured in the standard deviation image. When the defect is large, the difference in surface inclination between the defect-free portion and the defect portion can be detected in a phase image. In a defect-free portion, the phase in the phase image is the same in the direction parallel to the stripes, and changes at a constant rate in the direction perpendicular to the stripes according to the stripe period. In a defect portion, this regularity of the phase is disturbed. For example, by looking at the phase images in the X direction and the Y direction, a temporary defect candidate with a gentle curved-surface change can be detected.
[1-4] Detection of thread-like defects
As an example of defect detection by the defect detection unit 227, the process of detecting thread-like defects will be described.
[1-5] Flowchart
FIG. 10 is a flowchart showing the contents of the surface defect detection process executed by the defect detection PC 22. This surface defect detection process shows in more detail the contents of the pre-stage process of step S06 and the post-stage process of step S08 in FIG. 5. The surface defect detection process is executed by the processor in the defect detection PC 22 operating according to an operation program stored in an internal storage device such as a hard disk device.
[2] Second surface defect detection process
In the first surface defect detection process described above, the defect detection PC 22 extracts temporary defect candidates 30 from the images continuously acquired in time series from the camera 8.
[2-1] Flowchart
FIG. 14 is a flowchart showing the second surface defect detection process executed by the defect detection PC. Steps S11 to S13 and S15 to S20 are the same as steps S11 to S13 and S15 to S20 in FIG. 10, so the same step numbers are assigned and their description is omitted.
[3] Third surface defect detection process
In the first surface defect detection process described above, temporary defect candidates 30 are extracted in each of the images A11 to A17, defect candidates are then determined, estimated regions around the defect candidates are calculated, and the plurality of estimated region images C11 to C17 are combined to perform defect detection.
[2-1] Modification 1 regarding the position shift amount
In the above example, the position shift amount S corresponding to each of the divided regions 1 to p was calculated for each region from the magnification information of each region 1 to p and the approximate moving speed assumed in advance; however, the position shift amount S may instead be set based on the results of setting a plurality of position shift amounts for each of the regions 1 to p.
[2-2] Modification 2 regarding the position shift amount
The position shift amount S for each of the regions 1 to p may also be set as follows. That is, as shown in the graph of FIG. 17, if the movement distance of the work 1 between adjacent images is known, the amount of positional deviation on the image can be calculated. In the example described above, the position shift amount was set based on a work moving speed assumed in advance.
[2-3] Modification 3 regarding the position shift amount
In Modification 2, the position of the work was obtained using a plurality of cameras arranged in a row. Instead, the same portion of the work 1, or of a support member that moves in the same way as the work 1, may be measured by a measurement system including a distance sensor, a speed sensor, or a vibration sensor, used alone or in combination, to obtain the work position information.
[2-4] Flowchart
The overall processing of the workpiece surface inspection system is performed according to the flowchart shown in FIG. 5.
[4] Creation of a standard deviation image, etc.
In the first surface defect detection process and the third surface defect detection process, a plurality of images to be combined are created based on a plurality of images with overlapping imaging ranges captured in time series by the camera 8 when the work is moved while being irradiated with the light-dark illumination pattern, and these images are combined into a single composite image. As one such composite image, an image synthesized by calculating a statistical variation value, such as a standard deviation image, can be considered.
[4-1] Other embodiment 1 relating to the standard deviation image
FIG. 21 shows a graph of the illuminance on the work 1 from the illumination unit 6 that provides the light-dark pattern illumination. In the graph of FIG. 21, the tops 71 of the waveform indicate bright bands and the bottoms 72 indicate dark bands.
[4-2] Other embodiment 2 relating to the standard deviation image
In this embodiment as well, imaging is performed a plurality of times (N times) for one cycle of the illumination pattern; N may be a small number.
2 Moving mechanism
3 Illumination frame
4 Support base
6 Illumination unit
7 Camera frame
8 Camera
21 Master PC
22 Defect detection PC
30 Temporary defect candidate
40 Estimated coordinates
221 Image acquisition unit
222 Temporary defect candidate extraction unit
223 Coordinate estimation unit
224 Defect candidate determination unit
225 Image group creation unit
226 Image composition unit
227 Defect detection unit
Claims (24)
- 1. A workpiece surface defect detection device comprising: image acquisition means for acquiring a plurality of images of a measured portion of a workpiece, which is a surface defect detection target, in a state where a light-dark pattern of an illumination device is moved relative to the workpiece; provisional defect candidate extraction means for extracting provisional defect candidates from each image acquired by the image acquisition means; defect candidate determination means for determining a provisional defect candidate as a defect candidate if, among the plurality of images from which provisional defect candidates have been extracted by the provisional defect candidate extraction means, the number of images containing the provisional defect candidate is equal to or greater than a preset threshold value; image composition means for creating a composite image by combining the plurality of images containing the defect candidate determined by the defect candidate determination means; and detection means for performing defect detection based on the composite image created by the image composition means.
- 2. The workpiece surface defect detection device according to claim 1, wherein, with the threshold value defined as a first threshold value, the defect candidate determination means stops the process of determining a provisional defect candidate as a defect candidate when, in the middle of that process, the number of images containing the provisional defect candidate has not reached a preset second threshold value that is smaller than the first threshold value.
- 3. The workpiece surface defect detection device according to claim 1 or 2, wherein the defect candidate determination means determines the provisional defect candidate as a defect candidate if, among the plurality of images, the number of images in which the positions of the provisional defect candidate correspond is equal to or greater than a preset threshold value.
- 4. The workpiece surface defect detection device according to claim 3, wherein the position of the provisional defect candidate is represented by coordinates of the provisional defect candidate, the device further comprises coordinate estimation means for obtaining the coordinates of the provisional defect candidate extracted by the provisional defect candidate extraction means and for calculating, for each of a plurality of images following the image from which the provisional defect candidate was extracted, estimated coordinates to which the coordinates of the provisional defect candidate move, and the defect candidate determination means determines, for each of the subsequent images, whether the estimated coordinates calculated by the coordinate estimation means correspond to the provisional defect candidate in that image, and determines the provisional defect candidate as a defect candidate if the number of subsequent images in which the estimated coordinates and the provisional defect candidate correspond is equal to or greater than a preset threshold value.
- 5. The workpiece surface defect detection device according to any one of claims 1 to 4, further comprising image group creation means for cutting out, for each defect candidate determined by the defect candidate determination means, a predetermined region around the defect candidate as an estimated region from the plurality of images containing the defect candidate to create an estimated region image group, wherein the image composition means creates, for each defect candidate, a composite image by combining the plurality of estimated region images created by the image group creation means.
- 6. The workpiece surface defect detection device according to any one of claims 1 to 5, wherein the composite image is at least one of a standard deviation image, a phase image, a phase difference image, a maximum value image, a minimum value image, and an average value image.
- 7. The workpiece surface defect detection device according to claim 5, wherein the composition means aligns the plurality of estimated region images when creating the composite image by superimposing the center coordinates of the images.
- 8. The workpiece surface defect detection device according to claim 7, wherein the composition means corrects the center coordinates of each image based on the position of each center coordinate from a boundary of the light-dark pattern of the image corresponding to the light-dark pattern of the illumination device, and superimposes the corrected center coordinates.
- 9. The workpiece surface defect detection device according to any one of claims 5 to 8, wherein the composition means aligns the plurality of estimated region images when creating the composite image so that the evaluation value becomes maximum among a plurality of combinations in which the center coordinates of the images are shifted relative to one another in the X-coordinate and Y-coordinate directions.
- 10. The workpiece surface defect detection device according to any one of claims 1 to 9, wherein the provisional defect candidate extraction means extracts provisional defect candidates using at least two types of routines, a routine for extracting small defects and a processing routine for extracting gentle convex defects, and performs defect size correction for provisional defect candidates detected by the gentle convex defect routine.
- 11. The workpiece surface defect detection device according to any one of claims 1 to 10, wherein the detection means discriminates thread-like defects based on the amount of deviation from circularity of the two-dimensional shape of the two-dimensional defect signal in the phase image of the defect candidate.
- 12. A workpiece surface inspection system comprising: an illumination device that irradiates a workpiece, which is a surface defect detection target, with illumination light of a light-dark pattern; imaging means for capturing reflected light of the illumination light from the measured portion of the workpiece; moving means for moving the light-dark pattern of the illumination device relative to the workpiece; control means for controlling the imaging means so as to image the measured portion of the workpiece while the light-dark pattern of the illumination device is moved relative to the workpiece by the moving means; and the workpiece surface defect detection device according to any one of claims 1 to 11, wherein the image acquisition means acquires a plurality of images of the measured portion of the workpiece from the imaging means.
- 13. A workpiece surface defect detection method executed by a workpiece surface defect detection device, the method comprising: an image acquisition step of acquiring a plurality of images of a measured portion of a workpiece, which is a surface defect detection target, in a state where a light-dark pattern of an illumination device is moved relative to the workpiece; a provisional defect candidate extraction step of extracting provisional defect candidates from each image acquired in the image acquisition step; a defect candidate determination step of determining a provisional defect candidate as a defect candidate if, among the plurality of images from which provisional defect candidates have been extracted in the provisional defect candidate extraction step, the number of images containing the provisional defect candidate is equal to or greater than a preset threshold value; an image composition step of creating a composite image by combining the plurality of images containing the defect candidate determined in the defect candidate determination step; and a detection step of performing defect detection based on the composite image created in the image composition step.
- 14. The workpiece surface defect detection method according to claim 13, wherein, with the threshold value defined as a first threshold value, the process of determining a provisional defect candidate as a defect candidate is stopped when, in the middle of that process, the number of images containing the provisional defect candidate has not reached a preset second threshold value that is smaller than the first threshold value.
- 15. The workpiece surface defect detection method according to claim 13 or 14, wherein, in the defect candidate determination step, the provisional defect candidate is determined as a defect candidate if, among the plurality of images, the number of images in which the positions of the provisional defect candidate correspond is equal to or greater than a preset threshold value.
- 16. The workpiece surface defect inspection method according to claim 15, wherein the position of the provisional defect candidate is represented by coordinates of the provisional defect candidate, the method further comprises a coordinate estimation step of obtaining the coordinates of the provisional defect candidate extracted in the provisional defect candidate extraction step and of calculating, for each of a plurality of images following the image from which the provisional defect candidate was extracted, estimated coordinates to which the coordinates of the provisional defect candidate move, and in the defect candidate determination step it is determined, for each of the subsequent images, whether the estimated coordinates calculated in the coordinate estimation step correspond to the provisional defect candidate in that image, and the provisional defect candidate is determined as a defect candidate if the number of subsequent images in which the estimated coordinates and the provisional defect candidate correspond is equal to or greater than a preset threshold value.
- 17. The workpiece surface defect inspection method according to any one of claims 13 to 16, further comprising an image group creation step of cutting out, for each defect candidate determined in the defect candidate determination step, a predetermined region around the defect candidate as an estimated region from the plurality of images containing the defect candidate to create an estimated region image group, wherein in the image composition step a composite image is created for each defect candidate by combining the plurality of estimated region images created in the image group creation step.
- 18. The workpiece surface defect detection method according to any one of claims 13 to 17, wherein the composite image is at least one of a standard deviation image, a phase image, a phase difference image, a maximum value image, a minimum value image, and an average value image.
- 19. The workpiece surface defect detection method according to claim 17, wherein, in the composition step, the plurality of estimated region images are aligned when creating the composite image by superimposing the center coordinates of the images.
- 20. The workpiece surface defect detection method according to claim 19, wherein, in the composition step, the center coordinates of each image are corrected based on the position of each center coordinate from a boundary of the light-dark pattern of the image corresponding to the light-dark pattern of the illumination device, and the corrected center coordinates are superimposed.
- 21. The workpiece surface defect detection method according to any one of claims 17 to 20, wherein, in the composition step, the plurality of estimated region images are aligned when creating the composite image so that the evaluation value becomes maximum among a plurality of combinations in which the center coordinates of the images are shifted relative to one another in the X-coordinate and Y-coordinate directions.
- 22. The workpiece surface defect detection method according to any one of claims 13 to 21, wherein, in the provisional defect candidate extraction step, provisional defect candidates are extracted using at least two types of routines, a routine for extracting small defects and a processing routine for extracting gentle convex defects, and defect size correction is performed for provisional defect candidates detected by the gentle convex defect routine.
- 23. The workpiece surface defect detection method according to any one of claims 13 to 22, wherein, in the detection step, thread-like defects are discriminated based on the amount of deviation from circularity of the two-dimensional shape of the two-dimensional defect signal in the phase image of the defect candidate.
- 24. A program for causing a computer to execute the workpiece surface defect detection method according to any one of claims 13 to 23.