WO2008131474A1 - Method and apparatus for three dimensional image processing and analysis
- Publication number
- WO2008131474A1 (PCT/AU2008/000538)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- detection zone
- image detection
- images
- analysis apparatus
- Prior art date
- Legal status: Ceased
Classifications
- G06T7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
- G01B11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G06T7/55 — Depth or shape recovery from multiple images
- G06T2207/10012 — Image acquisition modality: stereo images
- G06T2207/30128 — Subject of image: food products
Description
- the present invention relates to a method and apparatus for image processing.
- the invention will be described with specific reference to the image processing of bananas, however it will be appreciated that this is but one non-limiting application of the invention.
- while image processing systems and methods do exist, many making use of laser triangulation to calculate three dimensional coordinates that describe the image being processed, these systems are typically not capable of analysing anything but the simplest of objects in a real time scenario.
- Such real time scenarios include the analysis of objects moving along a conveyor belt for sortation or other purposes.
- Hands of bananas provide an example of a complex object that is difficult to analyse using current image processing techniques.
- bananas are classified and sorted on the basis of, inter alia, size. This sortation is required for a number of reasons. For example, in Australia a large proportion of bananas are grown in Queensland and then exported to other states. Due to the danger of spreading fruit fly inherent in exportation, Australian authorities require that any bananas being exported from Queensland conform to certain size and condition requirements. Severe penalties are imposed for non-conformance with these requirements.
- bananas are picked while still green and are ripened artificially through the use of ethylene gas. Being able to sort bananas into fruit of substantially similar size is advantageous as fruits of the same general length and thickness can be ripened more evenly. This is desirable as it helps prevent fruit from being over/under ripened which in turn provides for less wastage and damage.
- Bananas, particularly when bunched together in a hand, are difficult to analyse due to the fact that each individual banana has a length, diameter and radius of curvature which is often masked by adjacent bananas.
- the highly irregular shape of bananas, both in an absolute sense and in terms of the variation in shape between individual bananas, and the fact that bananas are conveyed in hands rather than single fruit pieces, create significant difficulties for automatic image analysis.
- bananas have traditionally been, and continue to be, sorted by hand. This is a time consuming and difficult process, and workers often tend to be inconsistent in the sortation analysis and processing, as well as damaging a proportion of the fruit being handled during the analysis and processing.
- the present invention provides an image analysis apparatus, the apparatus including: first and second image capture devices, each image capture device having a field of view which includes an image detection zone; at least one light source adapted to generate a series of parallel lines onto the image detection zone; positioning means for locating an object to be analysed in the image detection zone; processing means adapted to cause the first and second image capture devices to capture first and second images of the image detection zone, the first and second images comprised of the parallel lines as distorted by the object located in the image detection zone, wherein the processing means is further adapted to analyse the lines of the first and second captured images to establish the extent to which each parallel line has been distorted by the object, and the processing means further adapted to establish at least one characteristic of the object by analysis of the extent to which the parallel lines have been distorted.
- the processor means may be adapted to calculate a plurality of reference line masks during a system calibration phase, each reference line mask relating to the view of an unimpeded parallel line from one of the image capture devices, and wherein the reference line masks are used by the processor means to calculate the extent to which the parallel lines have been distorted in the first and second images.
- the first image capture device may view the image detection zone from a first known viewpoint and the second image capture device may view the image detection zone from a second known viewpoint, and the processing means may use the first and second known viewpoints to map pixels of the first and second images to a common set of coordinates.
- the processing means may combine the first and second images into a combined image in which pixel intensities are mapped to a z value, the z value corresponding to the vertical distance of the pixel away from a base plane of the image detection zone.
- the z values are used by the processing means in an edge detection algorithm to extrapolate one or more edges present in the object.
- the at least one characteristic may be selected from the group of the width of an element of the object, the length of an element of the object, and the height of an element of the object.
- the object may be a hand of bananas and the element of the object may be a single banana.
- the positioning means may be a conveyor.
- the processing means may be further adapted to control the positioning means to locate the object in the image detection zone.
- the image capture devices may be selected from the group of CCD cameras and CMOS sensors.
- the light source may be a laser.
- the light source may be coupled to a line generator, the line generator adapted to generate the series of parallel lines onto the image detection zone.
- the image detection zone may be housed in an analysis chamber constructed of opaque material.
- the apparatus may further include a first detection sensor for detecting when an object is located in the image detection zone and notifying the processing means when an object is located in the image detection zone.
- the image analysis apparatus may further include a second detection sensor for detecting when no object is present in the image detection zone and notifying the processing means that no object is present in the image detection zone.
- the present invention provides a method for using an image analysis apparatus to establish at least one characteristic of an object, the method including the steps of: illuminating an image detection zone with a series of parallel lines from a light source; positioning the object within the image detection zone, causing distortion of at least one of the parallel lines in the series of parallel lines; capturing a first image of the distorted lines and a second image of the distorted lines from first and second image capture devices respectively, each image capture device having a different view of the image detection zone; analysing the lines of the first and second captured images to establish the extent to which each parallel line has been distorted by the object; establishing at least one characteristic of the object by analysis of the extent to which the parallel lines have been distorted.
- the method may further include the steps of: calculating a plurality of reference line masks during a calibration phase, each reference line mask relating to the view of an unimpeded parallel line from one of the image capture devices, and using the reference line masks to calculate the extent to which the parallel lines have been distorted in the first and second images.
- the first image capture device may view the image detection zone from a first known viewpoint and the second image capture device may view the image detection zone from a second known viewpoint, and the method may include using the first and second known viewpoints to map pixels of the first and second images to a common set of coordinates.
- the method may further include the step of combining the first and second images into a combined image in which pixel intensities are mapped to a z value, the z value corresponding to the vertical distance of the pixel away from a base plane of the image detection zone.
- the z values may be used in an edge detection algorithm to extrapolate one or more edges present in the object.
- the at least one characteristic may be selected from the group of the width of an element of the object, the length of an element of the object, and the height of an element of the object.
- the object may be a hand of bananas and the element of the object may be a single banana.
- the image capture devices may be selected from the group of CCD cameras and CMOS sensors.
- the light source may be a laser.
- the light source may be coupled to a line generator, the line generator adapted to generate the series of parallel lines onto the image detection zone.
- the present invention provides instructions executable by a computer processor to implement the method described above, and a computer readable storage medium storing such instructions.
- the present invention will now be described with specific reference to a system and method for the analysis and processing of images of hands of bananas, and sorting bananas on the basis of that analysis.
- Bananas have been selected as these highlight the ability of the apparatus and method to deal both with highly complex objects and objects that are composite (i.e. the analysis of individual bananas in a hand of bananas rather than a single banana on its own).
- bananas are but one example of the numerous objects and items that the method and apparatus of the present invention may be used to analyse.
- Figure 1 A provides a picture of a hand of bananas
- Figure 1B provides a picture of an individual banana
- Figure 2 provides a schematic representation of the banana analysing and sorting apparatus according to the preferred embodiment of the present invention
- Figure 3 provides a flowchart of the steps involved in transporting the bananas through the apparatus
- Figure 4A shows laser lines from the laser line generator 22 illuminating a surface unimpeded by any other objects
- Figure 4B shows laser lines from the laser line generator 22 illuminating a hand of bananas.
- Figure 5 provides a flowchart of the steps involved in the image analysis of a hand of bananas according to the preferred embodiment of the present invention
- Figure 6A depicts a reference line mask around an unimpeded laser line
- Figure 6B depicts a locus mask around a laser line
- Figure 7 provides a side view of a simplified system set up with rays from the laser line generator striking a table unimpeded by any object;
- Figure 8 shows the image captured by a camera of the scene of figure 7
- Figure 9 provides a side view representation of rays from the laser line generator as shown in figure 7 striking a single object
- Figure 10 shows the image captured by a camera of the scene in figure 9;
- Figure 11 provides representations of the camera centre and the plane of a laser line according to the setup depicted in figure 7;
- Figure 12 provides a representation of the point shown in figure 11 as seen by the camera
- Figure 13 shows a representation of the image of a single banana after analysis; and
- Figure 14 shows how the edges of the banana of figure 13 are calculated.
- figure 1A provides a picture of a hand of bananas 2 and figure 1B provides a picture of a single banana 6.
- a hand 2 of bananas is classified according to the size of the middle or central banana 4.
- the central banana 4 is measured and these measurements are taken as an indicator of the general size of the hand as a whole.
- the measurements of interest in this particular scenario are the length 7 of the banana, measured along the outer curved surface of the fruit, and the width 8 of the banana, measured at or near to the thickest portion of the fruit.
- the present invention will, therefore, be described with reference to the analysis of a hand of bananas with a view to obtaining these particular measurements.
- the hand of bananas 12 which is to be analysed is transferred to a conveyor 14.
- This transfer may be directly from the back of a fruit picking truck or from another location, for example an upstream conveyor transporting bananas from previous operations.
- where bananas are transferred to the conveyor directly from fruit picking trucks or similar, individual hands of bananas are separated from each other along the conveyor by speeding up a section of the conveyor.
- each individual hand 12 is conveyed through an analysis chamber 16 which is positioned over a section 15 of the conveyor.
- a system control processor 11 awaits a signal indicating the chamber 16 is free and analysis of the hand can be performed.
- the section 15 of the conveyor inside the analysis chamber 16 can be stopped, started, or have its speed adjusted independently of the conveyor sections 14 outside of the analysis chamber 16.
- inside the analysis chamber 16 are two image capture devices, entry camera 18 and exit camera 20.
- the cameras 18 and 20 are mounted at either end of the analysis chamber such that their field of view includes an image detection zone 23.
- Entry camera 18 is mounted just above the entry to the analysis chamber 16 and exit camera 20 is mounted just above the exit of the chamber 16.
- the cameras 18 and 20 may, for example, be CCD (charge-coupled device) cameras or CMOS (complementary metal oxide semiconductor) sensors.
- also mounted inside the analysis chamber 16 is a light source in the form of a laser line generator 22.
- the laser line generator 22 is mounted so as to illuminate the image detection zone 23.
- the laser line generator 22 includes a 50 Watt diode laser 24 with a wavelength of 660 nanometers. It will be appreciated that a light source of different power and wavelength may be used, however the above parameters have been found to provide a suitable image brightness when considering the spectral sensitivity of cameras 18 and 20.
- the laser 24 is coupled to a line generator 26 which generates 19 parallel lines each with a known inter-beam angle which allows calculation of the plane in which each laser line lies.
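As an illustration only: given an assumed generator pose and the known inter-beam angle, the plane of each laser line can be computed by fanning a central plane about the generator's axis. The pose, fan axis and 1.5 degree spacing below are hypothetical values, not parameters taken from the patent.

```python
import numpy as np

NUM_LINES = 19
INTER_BEAM_ANGLE = np.deg2rad(1.5)  # assumed spacing between adjacent planes

def laser_line_planes(origin, fan_axis, central_normal):
    """Return (normal, d) per line, each plane satisfying normal . X = d."""
    a = fan_axis / np.linalg.norm(fan_axis)
    planes = []
    for k in range(NUM_LINES):
        theta = (k - NUM_LINES // 2) * INTER_BEAM_ANGLE
        # Rodrigues' rotation of the central plane normal about the fan axis
        n = (central_normal * np.cos(theta)
             + np.cross(a, central_normal) * np.sin(theta)
             + a * np.dot(a, central_normal) * (1 - np.cos(theta)))
        planes.append((n, float(np.dot(n, origin))))  # plane passes through the generator
    return planes

planes = laser_line_planes(origin=np.array([0.0, 0.0, 1.2]),
                           fan_axis=np.array([1.0, 0.0, 0.0]),
                           central_normal=np.array([0.0, 1.0, 0.0]))
print(planes[9])  # the central, vertical plane
```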
- the number of lines used is not critical provided sufficient resolution can be obtained for the imaging and analysis as discussed below.
- the analysis chamber 16 itself is constructed out of opaque material. This prevents external light shining into the chamber 16 which aids in the analysis operations, as well as providing protection to people outside the chamber from the laser line generator 22.
- the analysis chamber 16 is further fitted with a detection sensor 28.
- the detection sensor 28 is mounted on the side of conveyor section 15 and detects when a hand of bananas is in the image detection zone 23. Once the bananas 12 are in the image detection zone 23, the detection sensor 28 detects this and sends a signal to the system control processor 11. Once system control processor 11 receives the signal from the detection sensor 28, the control processor 11 triggers the laser line generator 22 to flash, as well as the simultaneous acquisition of images from cameras 18 and 20.
- the system control processor can perform the necessary image processing and analysis (as discussed below) to calculate dimensions of the fruit in the hand of bananas 12. Based on these dimensions the controller 11 then classifies the hand 12.
- the hand 12 travels along the conveyor section 15 to exit the analysis chamber 16 and rejoin the main section of the conveyor 14.
- a further sensor may be added at the chamber exit which signals the system control processor 11 when the hand leaves the chamber 16.
- the signal may be provided as soon as the required images have been acquired by cameras 18 and 20 and processed.
- after exiting the chamber 16 the hand 12 is sorted by conveying the hand 12 to a sortation conveyor (not shown) with lanes running off to either side. Each of the side lanes is set up to accept fruit belonging to a particular classification, and a sortation controller (which may be the same controller as the system control processor 11 or a separate controller) queries the classification ascribed to the hand 12 by the system control processor 11 and uses this classification to direct the hand 12 along the appropriate side lane.
- the sortation controller may determine the correct lane, for example, by reference to a sortation table which defines which lane a particular classification has been assigned to.
- figure 3 provides an overview 40 of the general work flow of the sorting apparatus of the preferred embodiment.
- the hand 12 to be analysed and sorted is first transferred to the conveyor 14 (step 42).
- the hand 12 of bananas is then separated (step 44) along the conveyor 14 and conveyed towards the analysis chamber 16.
- the system control processor 11 sends a signal (step 46) to convey the hand 12 through the analysis chamber 16.
- the detection sensor 28 detects the position of the bananas 12 (step 48) when they reach the image detection zone and signals the system control processor 11.
- the system control processor 11 then triggers the laser line generator 22 and both cameras 18 and 20 to acquire the required images (step 50), and analyses and classifies (step 52) the hand 12 based on the captured images.
- the hand 12 is then conveyed out of the analysis chamber 16 (step 54) before being sorted (step 56) according to the classification determined by the system control processor 11.
- the two measurements of primary interest for the purpose of this example are the length (measured along the outer curved surface) and width (measured at or towards the middle) of the middle banana of the hand 12.
- Figure 4A shows laser lines from the laser line generator 22 illuminating a surface unimpeded by any other objects.
- Figure 4B shows laser lines from the laser line generator 22 illuminating a hand of bananas.
- the laser line generator 22 illuminates a hand of bananas in the image detection zone 23 with a number of parallel laser lines. It will be noted how, when the parallel lines fall on the hand of bananas, the lines are distorted, as viewed by one of the cameras, and it is the extent of that distortion which allows the characteristics of the bananas to be determined by suitable processing means.
- the images acquired by cameras 18 and 20 are processed and then combined to obtain a single two-dimensional image.
- in this combined image, pixel intensities are mapped to a z value.
- Edge detection algorithms as are known in the art can then be used to find the edges and/or tops of the fruit, and from the detected edges the required dimensional measurements can be extrapolated.
- the cameras 18 and 20 are calibrated such that each camera has a different view of the image detection zone 23.
- the system control processor 11 initialises the cameras 18 and 20 such that the view each camera has of the image detection zone 23 (and therefore any features within that detection zone) are mapped to coordinates based on the same real world coordinate system.
- the processor 11 calculates the source plane of each line in real world coordinates. Once the plane of each line has been determined, the processor 11 can then determine the three dimensional coordinates of each of the parallel laser lines in the image. A single three-dimensional image is then formed, based on the presence of an object in the image detection zone 23 which serves to occlude the view that one or other of cameras 18 and 20 has of the image detection zone 23.
- a significant problem in this form of laser triangulation is the marrying of object lines to their source.
- the line generator 26 splits the beam of the laser 24 into nineteen lines and when an object is placed in the image detection zone 23 of the chamber 16 an image consisting of these laser lines is formed.
- the real world dimensions of each laser line are able to be calculated.
- the data calculated during the system calibration include camera calibration matrices for each camera 18 and 20.
- the calculation of these matrices may be achieved, for example, according to the method described in Shapiro and Stockman, Computer Vision, or in Hartley and Zisserman, Multiple View Geometry.
- Calculation of the calibration matrices in turn allows the determination of coordinates corresponding to each camera's centre - a three-dimensional point that does not exist on the image plane but through which all rays of light must pass before ending up on the image plane.
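For a finite projective camera, the centre can be recovered as the right null vector of the 3x4 calibration matrix, as set out in the texts cited above. A minimal numpy sketch, using an illustrative matrix rather than values from the patent:

```python
import numpy as np

def camera_centre(P):
    """The camera centre C satisfies P @ [C; 1] = 0, so it is the right
    null vector of P (Hartley and Zisserman, 'Multiple View Geometry')."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]               # singular vector of the smallest singular value
    return C[:3] / C[3]      # dehomogenise

P = np.array([[1000., 0., 320., 0.],    # toy matrix: a camera at the origin
              [0., 1000., 240., 0.],
              [0., 0., 1., 0.]])
print(camera_centre(P))                 # -> [0. 0. 0.]
```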
- Equations of the planes that correspond to each of the laser lines are also calculated and stored at system calibration. These planes may be calculated using known geometrical techniques.
- reference images of the image detection zone 23 are captured by cameras 18 and 20.
- the image detection zone 23 is left empty, thus providing images which are used to show the expected location of unimpeded laser lines in the image detection zone 23.
- Figure 6A depicts a reference line mask 90 around an unimpeded laser line 92.
- reference line masks are stored in an array for use during the image analysis as discussed below.
- the image detection zone 23 is configured to deal specifically with the measurement of objects of a specified maximum height. For example it may be specified that only objects up to 250mm high will need to be measured.
- a block of the specified height is placed in the image detection zone 23 and images of the block captured by cameras 18 and 20. From these images the locus of particles for each laser reference line is determined. The rectangle corresponding to the locus of particles originating from a particular laser line is determined and this is used to calculate the locus mask for each line. In the case of nineteen lines, nineteen locus masks are obtained for each camera.
- Figure 6B depicts a locus mask 94 around a laser line 96 impeded by a block of the specified maximum height. If desired, the locus mask may be calculated without placing a physical block in the image detection zone 23. In this case the locus mask is calculated by using the specified height and camera view angles.
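A sketch of the "no physical block" calculation mentioned above, under an assumed geometry: the laser planes are close to vertical and the camera views the zone at a known angle from vertical, so a point at the maximum height h can shift a line across the base plane by at most h multiplied by the tangent of that angle. The angle, pixel scale and function name are illustrative assumptions.

```python
import numpy as np

def locus_mask_bounds(line_x_px, max_height_mm=250.0,
                      camera_angle_deg=45.0, px_per_mm=2.0,
                      shift_right=True):
    """Pixel corridor (x_min, x_max) that particles of one laser line
    can occupy, given the specified maximum object height."""
    max_shift = max_height_mm * np.tan(np.deg2rad(camera_angle_deg)) * px_per_mm
    if shift_right:
        return line_x_px, line_x_px + max_shift
    return line_x_px - max_shift, line_x_px

print(locus_mask_bounds(400))  # -> (400, ~900) for the assumed parameters
```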
- the locus masks are also stored in an array for use during the image analysis.
- once an image has been captured by a camera, that image is analysed iteratively using each of the locus masks as a sub-image in turn.
- any image particle that corresponds to an unimpeded laser line is interpreted as an indication that no object impeded that part of the laser line and therefore there is nothing of interest to analyse in that part of the image.
- particles that correspond to unimpeded laser lines are used to create exclusion corridors within the sub-image that do not require analysis.
- the detection sensor 28 detects (step 62) when the hand 12 is in the image detection zone 23 and signals the control processor 11 accordingly.
- the control processor 11 then triggers the laser line generator 22 to be operated (step 64), just prior to triggering the simultaneous image acquisition from cameras 18 and 20 (step 66). From this an entry camera image from camera 18 and an exit camera image from camera 20 are obtained.
- once the images have been acquired, the laser line generator 22 is turned off. This is done to reduce the risk of laser exposure and also serves to lengthen the service life of the laser diode.
- the exposure parameters such as shutter speed and aperture for cameras 18 and 20 are selected to provide the maximum depth of field and image sharpness of the parallel laser lines.
- appropriate settings are an aperture of f2 with an exposure time of 1.24 milliseconds.
- Binary thresholding is performed on each image to convert each image to black and white (step 68).
- in the following discussion, image “particles” will be referred to. A “particle” is an individual line segment separated from other line segments. Examples of image particles may easily be seen in figure 4B, where the curvature of the bananas has disrupted the laser lines to provide discrete line segments.
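A minimal sketch of the thresholding step and particle extraction using connected-component labelling from scipy; the threshold value, bounding-box size filter and synthetic frame are illustrative assumptions, not parameters from the patent.

```python
import numpy as np
from scipy import ndimage

def extract_particles(grey, threshold=128, min_box_area=20):
    binary = grey > threshold              # binary thresholding (step 68)
    labels, _ = ndimage.label(binary)      # one label per connected "particle"
    particles = []
    for slc in ndimage.find_objects(labels):
        if slc is not None:
            h = slc[0].stop - slc[0].start
            w = slc[1].stop - slc[1].start
            if h * w >= min_box_area:      # drop very small particles
                particles.append(slc)      # bounding box of the line segment
    return particles

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in image
print(len(extract_particles(frame)))
```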
- each of the camera images is analysed against the reference line masks and locus masks.
- the result of this step of the analysis is the generation of a total of N (where N is the number of reference laser lines) sub-images, each of which contains only particles originating from a particular laser line, n.
- This step allows for the reliable association of an image particle with its corresponding laser source line which, in turn, allows for the determination of real world three-dimensional coordinates.
- the generation of the sub-images is achieved by the iterative analysis of each individual camera image.
- Each iteration focuses on identifying particles originating from a particular laser line (referred to as n) according to the following steps:
- Each camera image is analysed against the reference line mask n corresponding to laser n. Any particles found within this mask are determined to be part of the unimpeded reference line and are then ignored as they do not provide relevant information on the image.
- the height (corresponding in this instance to the width of the particles and recalling that the preprocessing step involved rotation of the image to vertical) of these particles is then used to create an exclusion corridor in the locus mask for this line n.
- the rightmost or leftmost boundary particle of the scene image is iteratively identified. If the laser line is shifted to the left when impeded, the rightmost particle is identified. If the laser line is shifted to the right, the leftmost particle is identified. The following will be discussed in relation to identifying the rightmost particle as seen from the left camera (in this case entry camera 18). After the unimpeded line portions have been removed as discussed above, the locus mask is analysed for the rightmost particle. Once identified this particle is copied to the output image for line n as the particle must originate from line n. The particle is then deleted from the scene image so as not to interfere with the analysis of the scene for line n+1.
- Another exclusion corridor corresponding to this particle, in light of the maximum height, is created in the locus mask, as no other particles originating from line n can exist to the left of this particle. This set of steps is repeated until there are no more particles found in the image. A simplified sketch of this assignment loop is given below.
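In the sketch below, particles are reduced to (row, x-centroid) pairs and the masks are abstracted to an x position and an (x_min, x_max) corridor per line; the real system operates on 2D mask images. The entry-camera view is assumed, where impeded lines shift to the right.

```python
def assign_particles(particles, ref_x, corridors, tol=3.0):
    """particles: iterable of (row, x) centroids. Returns {line_n: [(row, x)]}."""
    remaining = set(particles)
    out = {}
    for n, (rx, (lo, hi)) in enumerate(zip(ref_x, corridors)):
        # particles on the unimpeded reference line carry no information
        remaining -= {(r, x) for (r, x) in remaining if abs(x - rx) <= tol}
        # within line n's corridor, the rightmost remaining particle on a row
        # must originate from line n: claim it, then delete it from the scene
        rightmost = {}
        for (r, x) in remaining:
            if lo <= x <= hi and x > rightmost.get(r, lo - 1):
                rightmost[r] = x
        out[n] = sorted(rightmost.items())
        remaining -= {(r, x) for r, x in rightmost.items()}
    return out

# toy scene: lines at x = 100, 200, 300; an object shifts part of line 2
print(assign_particles({(10, 100), (11, 230), (12, 200)},
                       ref_x=[100, 200, 300],
                       corridors=[(100, 180), (200, 280), (300, 380)]))
```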
- a size threshold may be applied to the identification of particles by filtering very small particles from the image before each iteration.
- the result of this phase of the analysis is that the number of particles that need to be analysed in each iteration is decreased which assists in speeding up the image analysis.
- each of the N output images for each camera is analysed using standard triangulation methods.
- Appropriate methods may, for example, be similar to those described in Shapiro and Stockman, 'Computer Vision'; Hartley and Zisserman, 'Multiple View Geometry'; or Trucco and Verri, 'Introductory Techniques for 3-D Computer Vision'.
- each image is analysed by calculating the intersection of the plane corresponding to laser line n with the line passing through a pixel in the image plane (whose real world three dimensional coordinates are to be calculated) and the camera centre.
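The intersection itself follows the standard finite-camera construction from the texts cited above; in the numpy sketch below, the projection matrix, the laser plane and the pixel are illustrative stand-ins for values obtained at calibration.

```python
import numpy as np

def backproject(P, pixel):
    """Ray through the camera centre and image point (u, v). For a finite
    camera P = [M | p4]: centre C = -inv(M) @ p4, direction inv(M) @ (u, v, 1)."""
    M, p4 = P[:, :3], P[:, 3]
    Minv = np.linalg.inv(M)
    C = -Minv @ p4
    d = Minv @ np.array([pixel[0], pixel[1], 1.0])
    return C, d / np.linalg.norm(d)

def intersect_plane(origin, direction, n, dist):
    """Point where the ray meets the laser-line plane n . X = dist."""
    t = (dist - n @ origin) / (n @ direction)
    return origin + t * direction

P = np.array([[1000., 0., 320., 0.],             # illustrative calibrated camera
              [0., 1000., 240., 1200.],
              [0., 0., 1., 500.]])
n_plane, d_plane = np.array([0., 1., 0.]), 0.8   # illustrative laser plane
C, ray = backproject(P, (300., 220.))
print(intersect_plane(C, ray, n_plane, d_plane))
```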
- edge detection methods are used to locate the transitions in height that correspond to the edges present in the object. For example, when considering a greyscale image, a point of zero intensity (black) would correspond to a height of zero, while white corresponds to the highest part of the object in the workspace. Established edge detection algorithms can then be used to pick out peaks and troughs. Further, discontinuities in the rate of change of greyscale can be used to detect the edges of the fruit.
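For instance, along a single column of the combined image the height profile can be scanned for peaks (fruit tops) and troughs (boundaries between fruit); the synthetic profile and prominence threshold below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

profile = np.concatenate([                        # two "fruit" cross-sections
    50 * np.sin(np.linspace(0, np.pi, 60)),
    60 * np.sin(np.linspace(0, np.pi, 70)),
])
peaks, _ = find_peaks(profile, prominence=10)     # tops of individual fruit
troughs, _ = find_peaks(-profile, prominence=10)  # boundaries between fruit
print(peaks, troughs)
```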
- edges correspond to edge points of individual bananas in the hand.
- curves can be fitted to these edge points to define an estimation of the actual edge of the fruit. The points at which the two curves intersect then provide the ends of the fruit (see example below). From these edges and end points, the length and thickness of the fruit can be calculated by simple measurement based on pixel distance and height (intensity).
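A minimal sketch of that fitting step: quadratics are an assumed choice of curve, and the end-point coordinates are synthetic.

```python
import numpy as np

# end points of the laser-line segments along one fruit (illustrative data)
upper = np.array([(1, 5.0), (2, 6.5), (3, 7.0), (4, 6.4), (5, 5.1)])
lower = np.array([(1, 4.2), (2, 3.1), (3, 2.8), (4, 3.3), (5, 4.0)])

cu = np.polyfit(upper[:, 0], upper[:, 1], 2)  # curve through upper end points
cl = np.polyfit(lower[:, 0], lower[:, 1], 2)  # curve through lower end points
roots = np.roots(cu - cl)                     # where the two curves intersect
ends = np.sort(roots[np.isreal(roots)].real)  # the two ends of the fruit
print(ends)
```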
- the knowledge of which laser plane each image particle is from provides the basis for employing further processing to make meaningful groupings of particles. For example, by applying the rule that all image particles in an output image for a line n (see section 5.1(2) above) belong to different objects, logical groupings can be made for particles from laser lines n+1 and n-1. The nearest (Cartesian distance in three dimensions) particles from lines n+1 and n-1 may be deduced to belong to the same object (fruit) if they also satisfy a threshold distance and quality-of-fit metrics. These rules would typically be applied from the highest particle in the image downwards to group particles together to identify a particular object of interest (the topmost single banana, for example). This grouping of particles can then be used to correctly determine the length and thickness of a fruit, as sketched below.
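A hedged sketch of such a grouping pass; the data layout, the greedy nearest-neighbour walk and the distance threshold are illustrative assumptions rather than the patent's exact rules.

```python
import numpy as np

def group_fruit(particles_by_line, max_dist=30.0):
    """particles_by_line: {line_n: [np.array([x, y, z]), ...]} taken from the
    per-line output images. Returns {line_n: particle} for one fruit,
    seeded from the globally highest particle."""
    seed_line, seed = max(
        ((n, p) for n, ps in particles_by_line.items() for p in ps),
        key=lambda item: item[1][2])         # start from the highest point
    group = {seed_line: seed}
    for step in (1, -1):                     # walk out to lines n+1, ... then n-1, ...
        prev, n = seed, seed_line + step
        while n in particles_by_line and particles_by_line[n]:
            cand = min(particles_by_line[n],
                       key=lambda p: np.linalg.norm(p - prev))
            if np.linalg.norm(cand - prev) > max_dist:
                break                        # nearest particle too far: another fruit
            group[n] = cand
            prev, n = cand, n + step
    return group

lines = {1: [np.array([0., 10., 40.])],      # toy data: three lines, one banana
         2: [np.array([0., 30., 55.]), np.array([0., 90., 20.])],
         3: [np.array([0., 50., 42.])]}
print(sorted(group_fruit(lines)))            # -> [1, 2, 3]
```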
Simplified system example
- Figure 7 provides a side view of a simplified system set up with rays from the laser line generator 22 striking a table unimpeded by any object (in the simplified view only a limited number of laser lines have been depicted). Only the left hand camera 18 is depicted. In a real life situation, both entry and exit cameras would typically be required in order to obtain accurate images.
- Figure 8 shows the image 101 of the unimpeded laser lines on the table as taken by camera 18.
- Figure 9 provides a side view representation of rays from the laser line generator 22 striking a single banana 100.
- Figure 10 shows the image of the laser lines falling on the banana 100 as taken by camera 18.
- the presence of the banana 100 causes both breaks 102 in the original reference lines (the unimpeded laser lines shown in figure 8) as well as distortion/displacement 104 of segments of those lines from the perspective of the camera.
- because the camera 18 is at an angle of approximately 45 degrees to the image, the image particles 104, as captured by the camera, appear to have shifted to the right.
- Figure 11 depicts the camera centre 106 (which is determined at calibration of the system in real world coordinates) as well as the plane 108 of a single laser line (also determined at calibration). Also shown is a single particle 110 on the banana 100 (this particle as seen by the camera is shown in figure 12).
- the real world coordinates of particle 110 are calculated from the image by determination of the intersection of the line 112 passing through the camera centre 106 and the image plane 111 and the plane of the laser line 108. Points in the image taken by the camera (such as particle 110) correspond to interrupted laser lines and are used to calculate the three-dimensional coordinates of those parts of the banana 100 intersected by the laser lines. These particles are then transformed into an image where the x, y coordinates are normalised to real world units and the z coordinate (corresponding to the real world height of the point) is proportional to the intensity of the point.
- the left hand line of the set of 19 parallel lines is analysed first (i.e. the leftmost line is line number 1). If that line is unbroken it can be discarded for analysis purposes and analysis can begin on line number 2, and so on.
- the first broken line (line n) must by definition be located towards the left hand edge of the banana.
- the processor then discards all those portions of line n which are unbroken, and, in that corridor where the line was broken, the processor shifts attention to the right of the image to locate the image particle from line n that has shifted to the right. Simple trigonometry enables the processor to determine the height of that particle above the surface 115 of the detection zone. Once line n has been fully analysed, the processor discards the entire line, including the shifted image particles, and turns attention to line n+1.
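A sketch of that trigonometry under the geometry assumed throughout this example: near-vertical laser planes and a camera inclined at approximately 45 degrees from vertical, so a particle shifted by s (in real-world units on the base plane) lies at height h = s / tan(camera angle). The function name and defaults are illustrative.

```python
import numpy as np

def particle_height(shift_mm, camera_angle_deg=45.0):
    """Height above the base plane implied by a particle's apparent shift."""
    return shift_mm / np.tan(np.deg2rad(camera_angle_deg))

print(particle_height(42.0))  # -> 42.0 mm for a 45-degree camera
```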
- Figure 13 shows a representation of the image of a single banana after analysis.
- the intensity of points along each line (corresponding to the height of various points on the line) is low at the ends of the lines and highest in the middle of the lines.
- the peaks and troughs can be used to identify the boundary of each fruit as well as the highest points on each fruit.
- Figure 14 shows how the edges of a fruit can be calculated by fitting curves 114 through the end points of the lines and determining the intersections 116 of those curves. Once the curves 114 have been fitted and intersections calculated the width of the fruit can be calculated directly across the lines, and the length can be measured across the end points (where the fitted curves intersect) and the intensity mid points of each line segment.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Length Measuring Devices By Optical Means (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2008243688A AU2008243688B2 (en) | 2007-04-26 | 2008-04-17 | Method and apparatus for three dimensional image processing and analysis |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2007902205A AU2007902205A0 (en) | 2007-04-26 | | Method and apparatus for three dimensional image processing and analysis |
| AU2007902205 | 2007-04-26 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2008131474A1 (en) | 2008-11-06 |
Family
ID=39925092
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/AU2008/000538 Ceased WO2008131474A1 (en) | 2007-04-26 | 2008-04-17 | Method and apparatus for three dimensional image processing and analysis |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU2008243688B2 (en) |
| WO (1) | WO2008131474A1 (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2578991A1 (en) * | 2011-10-06 | 2013-04-10 | Leuze electronic GmbH + Co KG | Optical sensor |
| JP2014229154A (en) * | 2013-05-24 | 2014-12-08 | 株式会社ブレイン | Article identification system and program for the same |
| JP2014229153A (en) * | 2013-05-24 | 2014-12-08 | 株式会社ブレイン | Article identification system and program for the same |
| CN104315977A (en) * | 2014-11-11 | 2015-01-28 | 南京航空航天大学 | Rubber plug quality detection device and method |
| CN111724441A (en) * | 2020-05-28 | 2020-09-29 | 上海商汤智能科技有限公司 | Image annotation method and device, electronic device and storage medium |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6072915A (en) * | 1996-01-11 | 2000-06-06 | Ushiodenki Kabushiki Kaisha | Process for pattern searching and a device for positioning of a mask to a workpiece |
| WO2006136814A1 (en) * | 2005-06-24 | 2006-12-28 | Aew Delford Systems | Vision system with picture correction storage |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPWO2006013681A1 (en) * | 2004-08-03 | 2008-05-01 | 株式会社ブリヂストン | Pneumatic bladder for safety tire |
2008
- 2008-04-17: PCT application PCT/AU2008/000538 filed (WO2008131474A1); status: not active, ceased
- 2008-04-17: AU application AU2008243688A filed (AU2008243688B2); status: not active, ceased
Also Published As
| Publication number | Publication date |
|---|---|
| AU2008243688A1 (en) | 2008-11-06 |
| AU2008243688B2 (en) | 2013-12-12 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 08733365; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2008243688; Country of ref document: AU |
| | ENP | Entry into the national phase | Ref document number: 2008243688; Country of ref document: AU; Date of ref document: 20080417; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 08733365; Country of ref document: EP; Kind code of ref document: A1 |