Shape and Deformation Measurements of Large Objects by Fringe Projection
FIELD OF THE INVENTION This invention relates to a system and method for measuring the surface height and/or deformation of the surface of an object by fringe projection. The invention is applicable in particular to the shape and deformation measurement of large objects (1 m2 or more).
BACKGROUND OF THE INVENTION There is a real need in industry (civil engineering, aeronautics, aerospace, naval engineering, metallurgy, the automotive industry etc.) for methods allowing the measurement of form and deformation of large objects. Even though there are numerous methods for measuring the form and the deformation of small objects (up to 1 m2), currently none of them allows the fast measurement of larger objects at once for many points, for example 0.5 to 1 million. Among existing techniques, fringe projection can potentially solve this problem.
Under its classical form, this technique consists in projecting rectilinear and equispaced fringes (i.e. separated by a constant period p) on an object in one direction and observing the scene from another direction, for instance with a CCD camera. The displacement of the fringes distorted by the object contains the wanted shape information. More particularly, considering the minima or maxima of the light intensity, the height z is proportional to the local change in the period of the fringe. It can be written as: z = h = (pp - pa)/tanα (i) where pp is the period of the fringes as seen when projected on a plane, pa is the period of the fringes as seen when projected on the object, and α is the angle between the projection and observation directions.
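As a quick numerical illustration of equation (i), the following sketch computes the height from the local change in fringe period. The two periods and the angle are assumed toy values, not numbers taken from the text:

```python
import math

# Toy values (assumed): fringe period as seen on a reference plane,
# period as seen on the object, and the angle between the projection
# and observation directions.
p_plane = 10.5                      # pp, in mm
p_object = 10.0                     # pa, in mm
alpha = math.radians(30.0)          # angle between projection and observation

# Equation (i): z = h = (pp - pa) / tan(alpha)
z = (p_plane - p_object) / math.tan(alpha)
```

With these numbers the local period change of 0.5 mm translates into a height of roughly 0.87 mm.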
In the case where only the fringe intensity is considered, the information is discrete
(local). In order to extend the wanted height information to the space between the fringes, we relate it to the phase of the fringes instead of their period. This approach offers the additional advantage of making the information acquisition and processing simpler, and more automatic.
In the case of interferometrically created fringes, the intensity I at any point (x, y, z) can be written as:
I(x,y,z) = I0 + IM·cos(φ(x,y,z)) (ii)
where I0 is the background intensity, IM is the modulation intensity, and φ(x,y,z) is the phase at the point (x, y, z). From this equation we see that the phase φ(x,y,z) contains the wanted height information. The knowledge of φ(x,y,z) for each point of the object gives an optical print of the object called a phase map.
Robinson [Interferogram Analysis: Digital Fringe Pattern Measurement Techniques, 1993, Institute of Physics Publishing] reviews several methods to automate the acquisition and processing of fringe images, in order to obtain an optical print (i.e. phase map) of the object under investigation. Some methods are based on the intensity of the captured light, others on its phase. There are several examples of the latter, such as the "phase shifting" procedure followed by a "phase unwrapping" procedure, or the wavelets method, which allow the object's optical print ("phase map") to be obtained. However, other methods are also possible.
The principle of the phase-shifting method consists basically in moving the fringes step by step on the whole surface, by increasing the phase by a known amount. At the same time, images of these fringes are acquired for each step. For a sequence of three images, the intensity / at any point (x, y, z) can be written:
I1(x,y,z) = I0 + IM·cos(φ(x,y,z))
I2(x,y,z) = I0 + IM·cos(φ(x,y,z) + 2π/3)
I3(x,y,z) = I0 + IM·cos(φ(x,y,z) + 4π/3)
From these three different values of the intensity registered at each point of the images, the value of the phase φ(x,y,z) at a point (x, y, z) can be computed:
φ(x,y,z) = f(I1,I2,I3) = arctan[√3·(I3 - I2) / (2I1 - I3 - I2)]
By doing the same for all the points on the images, we obtain the so-called "wrapped" phase map. In this intermediate optical print, the phase is discontinuous and known "modulo 2π". At least three images are necessary to perform the phase shifting procedure; other algorithms using a greater number of images also exist. An additional operation called "phase-unwrapping" is performed next. It consists in developing the "wrapped" phase obtained above to give a continuous "unwrapped" phase map. The latter contains the signature of the measured object shape, and hence, the needed information.
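The three-step formula above can be sketched as follows in Python/NumPy. This is an illustration, not the system's actual implementation; `arctan2` is used so the quadrant of the phase is kept, and the three intensity images are synthesized from a known phase map as a self-check:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting with phase steps 0, 2*pi/3 and 4*pi/3.

    Returns the phase wrapped into (-pi, pi].  arctan2 keeps the correct
    quadrant, which the plain arctan formula would lose.
    """
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Self-check: build the three intensity images from a known phase map,
# with background I0 and modulation IM as in equation (ii).
phi_true = np.linspace(-3.0, 3.0, 64)
i0, im = 2.0, 1.0
i1, i2, i3 = (i0 + im * np.cos(phi_true + k * 2.0 * np.pi / 3.0) for k in range(3))
phi_rec = wrapped_phase(i1, i2, i3)
```

Since the chosen true phase stays within (-π, π], the recovered phase matches it exactly; a phase outside that range would come back wrapped, which is what the subsequent unwrapping step corrects.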
In the case of small objects (classical approach), the extraction of the shape information from this optical print is quite simple. A phase map of the object as well as a phase map of a reference plane surface is acquired. One can easily show that the height is proportional to the difference between the phase map of the measured object and the phase map of a reference plane. Thus, basically, the wanted information is obtained by subtracting these two maps one from another, thereby producing a height map.
As regards the equipment, there are mainly two possibilities for creating rectilinear fringes: either with a white-light projector (e. g. slide projector, liquid crystal display) and a grating, or with a laser and an interferometer. In the case of coherent illumination, a beam expander is used to enlarge the illumination zone, leaving it collimated. In both cases, the result is a family of plane parallel "sheets" of light in the volume surrounding the object under investigation.
For larger objects, the above approach is no longer possible. Indeed, for dealing with large objects, the illuminating beams must be divergent. In addition, the depth of field suggests the use of interferometrically-created fringes. Hence, the projected fringes are not rectilinear, parallel and equidistant. Instead, the surfaces of equal phase difference are hyperboloids, and consequently the fringes created are curved and no longer equidistant. All this corresponds to a mathematical relation between the height z and the phase φ that is no longer linear as in the classical case. Because of this, problems of data treatment arise. Furthermore, a "once-for-all" calibration of the measurement volume becomes useless, because both the observation and the illumination positions have to be adapted to the object size and the available space. In addition, even if under certain assumptions the relation between the height z and the phase φ can be simplified and made to look like the one for the classical case, there exists no reference plane as large as the object under investigation.
Therefore, another way of calculating the height z must be found, and another approach based on new algorithms must be developed.
State of the Art for Measuring Large Objects The article [« Numérisation 3D, technologies et besoins » ("3D digitisation, technologies and needs"), Contrôle Industriel, No. 217, February 1999, p. 22-49] reviews the techniques most currently used in industry for measuring the shape of objects. Among these, the ones that could be applied to large objects are: theodolites, telemetry, laser scanners and close-range photogrammetry. The first three (theodolites, telemetry, laser scanners) give only local and discrete information, and the measurement must be repeated (scanned) to obtain the overall shape information. This implies a non-simultaneous measurement of the points on the object surface. Close-range photogrammetry is a more global technique, allowing many points to be measured at the same time; however, it has two major drawbacks. First, since the surfaces measured in industry usually have no texture (not visible on the images), targets have to be placed on the object surface before measurement to mark the points where the height information will be collected. This implies two things: a relatively long preparation time of the surface and a discrete (i.e. limited) number of points where the height information is known. The second major disadvantage of photogrammetry is the long data-treatment time (which grows with the number of points considered for the measurement). In conclusion, none of the methods usually applied in industry allows the fast measurement of larger objects at once for a great number of points (about 500 000 points or more).
One potential technique, as discussed above, is fringe projection. Several set-ups are commercially available for example from Steinbichler Optotechnik GmbH, see www.steinbichler.de; Breuckmann GmbH, see www.breuckmann.com; BIAS GmbH, D- 28359 Bremen, Germany; GOM GmbH, Braunschweig, Germany see www.gom.com; and Dr. Ettenmeyer GmbH and Co, New Ulm, Germany. However, mainly due to calibration problems, very few people have applied this technique to larger surfaces (above 1 m2). There are several examples where these systems were used to measure objects of larger size by measuring small portions of the objects and then matching them ("patchwork"). See for example GOM GmbH, Braunschweig, Germany or www.gom.com; and Reich C, Ritter R., Thesing J., 3-D measurement of complex objects by combining photogrammetry and fringe projection, Optical Engineering, 39, p.224-231 (2000). However this approach implies that the measurement must be repeated, and that the measured areas must be precisely adjusted together before getting the overall shape information. Therefore this approach has two drawbacks: the time for measurement and
data treatment increases, and the overall precision decreases, compared to the precision of a single measurement on a small area.
As mentioned by Chen [Chen F., Brown G. M., Song M., Overview of three-dimensional shape measurement using optical methods, Optical Engineering, 39, 10-22 (2000)], Lehmann et al. were the first to propose a solution for measuring objects several square metres large, getting height information on as many as 500 000 points at the same time; see the article "Shape Measurement on Large Surfaces by Fringe Projection", Lehmann et al., Experimental Techniques, 1999, vol. 23, no. 2, pp. 31-35. This work, however, concerned the shape measurement of large, relatively flat surfaces of several tens of square metres, and recognised that the application of fringe projection to large surfaces poses particular problems for the system calibration, and in particular for the projection head calibration. An improvement in projection head calibration has been proposed and developed by Desmangles et al. ["Large Object Shape Measurement using Coherent Light Fringe Projection: a new Approach for Calibration", Desmangles et al., 3rd Topical Meeting on Optoelectronic Measurement and Application, Sept. 20-22 2001, University of Pavia, Italy]. It is based on a new projection head made of two mutually tuned interferometers. The basic assumptions of these two methods limit their application only to flat objects and to a given measurement configuration where the object is parallel to the camera imaging-plane and centred on the camera axis. This prior art thus highlights the problems of applying fringe projection to large objects of any shape, and does not disclose any effective way of producing measurements of the shape and/or deformation of such surfaces in a rapid and convenient way.
Further prior art relates to measuring the surface coordinates of small objects. For example, WO 00/26615 and the article Schreiber W., Notni G., Theory and arrangements of self-calibrating whole-body three-dimensional measurement systems using fringe projection technique, Optical Engineering, 39, p. 159-169 (2000) describe a device for determining the spatial coordinates of small objects placed on a rotatable table with one or two projectors and one or several detection cameras, arranged for limited relative movement along fixed trajectories. In this method, the test object is successively illuminated with white light, with two grating sequences perpendicular to each other, from at least two different directions, resulting in a surplus phase value for each measurement point. Furthermore, this method relies on taking several views from different angles.
EP- A- 1,067,362 describes a fringe projection method mainly for dewarping images of small objects like curved pages of a book. Here a discrete three dimensional profile
(information available only for lines, not for the whole surface) of the surface is calculated relative to a reference surface, which is the support surface for the book, using a projector and a camera that are co-incident.
The scaling up of the latter two methods for the measurement of large objects would be problematic.
SUMMARY OF THE INVENTION An object of the invention is to provide a system and device for measuring the surface height and/or deformation of the surface of objects that can be large (more than 1 m2) and of different shapes (not necessarily planar), by fringe projection with projection of the fringes and viewing of the object from any convenient viewpoint and angle, with simplified calibration and with the possibility of rapid on-site production of a virtual image of the object showing its surface height and/or deformation, as required. In one aspect, the invention proposes a system as set out in claim 1 for measuring the surface height and/or deformation of the surface of an object by fringe projection, wherein the surface of the object is defined in terms of x, y and z coordinates to be determined.
The system comprises a projection head (projector) for projecting a divergent beam of light onto the surface of an object at a given angle to produce a periodic pattern of fringes on the object. The projection head preferably projects coherent light, for example two substantially superposed beams of coherent light, using two point sources. White light projection is less preferred because of its inherent limitations. The projection head can for example be made of a modified Mach-Zehnder interferometer, whereby two slightly tilted laser beams enter the same microscope objective to create the two point sources and project fringes on the measured object.
An imaging device is arranged to view the object from another angle and input images of the reflected light of the fringes, these images of the fringes of light being projected on a 2-D pixel array. The imaging device advantageously comprises a solid state camera, for example a CCD (Charge-Coupled Device) camera comprising a silicon chip consisting of a rectangular array of light-sensitive cells backed by circuitry that transfers the image recorded by the cells into memory. Other types of imaging device with a 2-D pixel array can also be used.
Means are provided to extract pixel position (i, j) and phase information φ(i,j) associated with each pixel position from at least one image of the fringes to produce an optical print of the object containing phase information together with image-coordinates (ξ,η), the phase information and image coordinates both being associated with the pixel positions.
The phase information is physically related to the intensity of light at each measured point, and depends on the fringe generation process. In the most usual and preferred case, the fringe generation process depends on the relative position (rx,ry,rz) of two point sources and the relative position of the object and projector head. The relation between the intensity and phase is given by equation (ii).
The image-coordinates (ξ,η) are related to the pixel position (i,j), as will be described in greater detail below. The phase-information φ corresponding to a certain point of the surface with object coordinates (x,y,z) is projected on the 2D array on a given pixel position (i,j) corresponding to the above-mentioned object-coordinates (x,y,z).
The means to extract phase information is usually arranged to provide an unwrapped phase-map from at least one intensity image of the fringes. The system also includes a theodolite or other means for establishing the system parameters related to the position in space of the projection head and the imaging device relative to the object, for any given spatial relationship of the projection head, imaging device and object. The elements of the system (camera, projection head) can each be freely positioned and adjustable in order to optimise the measurement of any particular object according to both the object and its environment. For this reason, it is necessary to establish the system coordinates for any given relative position of the system elements.
Finally, the system comprises a processor arranged to calculate the object-coordinates x, y and z of each measured point of the object by combining the image-coordinates (ξ,η) and the phase information φ corresponding to the object-coordinates through mathematical functions describing x, y and z, these mathematical functions including system parameters describing the spatial configuration and specifications of the system, and established by calibration procedures. Then the processor produces a virtual representation of the height and/or a deformation of the surface.
These mathematical functions are the general solutions of a second set of equations describing the system (configuration and specifications). This second set of equations can include object-coordinate functions and a phase function, wherein the object-coordinate functions express the x and y object-coordinates of the surface of an object in terms of the measured image-coordinates (ξ,η) (corresponding to the pixel positions (i,j) in the pixel array corresponding to said x and y object-coordinates), the z object-coordinate, and system parameters; and the phase function expresses the phase information at any point (with object-coordinates x, y and z) of the surface of the object in terms of the x, y and z object-coordinates and system parameters.
In the system and method of the invention, the set-up is described by a phase equation and the central projection equations. Solving them simultaneously makes it possible to determine mathematical expressions for the object-coordinates (x,y,z) of all measured points from the optical print (unwrapped phase map). This approach is general and offers the advantage of allowing the measurement of shape and deformation of large objects. It also makes the system more flexible.
In a system operating with coherent light, the processor is for example arranged to resolve simultaneously a phase equation, relating the phase φ(x,y,z) to the positions of the two point sources and the wavelength λ, together with the central projection equations:

ξ = ξ0 - c·[r11(x - X0) + r21(y - Y0) + r31(z - Z0)] / [r13(x - X0) + r23(y - Y0) + r33(z - Z0)]
η = η0 - c·[r12(x - X0) + r22(y - Y0) + r32(z - Z0)] / [r13(x - X0) + r23(y - Y0) + r33(z - Z0)]

where:
- φ(x,y,z) is the phase at point P(x,y,z);
- Rx, Ry, Rz, rx, ry, rz are parameters describing the projection head;
- λ is the wavelength of the coherent light;
- ξ, η are the image-coordinates corresponding to point P(x,y,z);
- ξ0, η0 are the image-coordinates of the central point;
- c is the principal distance of the model of the imaging device;
- X0, Y0, Z0 are the object-coordinates of the perspective centre; and
- rkl are the elements of the rotation matrix describing the orientation of the imaging plane with regard to the object-coordinates; they depend on the three rotation angles ω, κ, φ (k, l = 1 to 3).

In a variation, also using coherent light, the processor is arranged to resolve a set of equations containing a more general phase equation:
φ(x,y,z) = (2π/λ)·[√((Rx + rx - x)² + (Ry + ry - y)² + (Rz + rz - z)²) - √((Rx - x)² + (Ry - y)² + (Rz - z)²)] (iii)
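Equation (iii) can be sketched directly as the path-length difference between the measured point and the two point sources. The geometry below (source separation, wavelength, object distance) is hypothetical; as a sanity check, a point equidistant from both sources has zero phase:

```python
import numpy as np

def interferometric_phase(p, s1, s2, wavelength):
    """Phase at object point p produced by two coherent point sources, as in
    equation (iii) with s1 = (Rx + rx, Ry + ry, Rz + rz) and s2 = (Rx, Ry, Rz):
    2*pi/lambda times the path-length difference |p - s1| - |p - s2|."""
    p, s1, s2 = (np.asarray(v, dtype=float) for v in (p, s1, s2))
    return 2.0 * np.pi / wavelength * (np.linalg.norm(s1 - p) - np.linalg.norm(s2 - p))

# Hypothetical geometry: sources 1 mm apart, green laser, object about 2 m away.
s2 = np.array([0.0, 0.0, 0.0])                 # first source at (Rx, Ry, Rz)
s1 = s2 + np.array([1.0e-3, 0.0, 0.0])         # second source offset by (rx, ry, rz)
lam = 532e-9                                   # wavelength in metres

# A point on the mid-plane between the two sources sees equal path lengths,
# hence zero phase difference.
phi_mid = interferometric_phase([0.5e-3, 0.3, 2.0], s1, s2, lam)
```

Moving the point off the mid-plane makes the two distances differ, and the phase grows as 2π/λ times that difference, which is why the surfaces of equal phase are hyperboloids.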
In another aspect, the invention proposes a method for measuring the surface height and/or deformation of the surface of an object by fringe projection, wherein the surface of the object is defined in terms of x, y and z coordinates to be determined, as set out in claims 9 to 14. The system and method of the invention have been used successfully to measure the shape and/or deformation of large objects of any size and shape, with measurements to an accuracy of about 1/1000 of the object's largest dimension/diameter. These measurements can be made on objects of different materials: metal, plastic, concrete, fabric, etc.
The system and method of the invention are particularly advantageous for large objects, but can also be used for small objects.
The system and method of the invention are very flexible as regards making the measurements. The elements of the system can be placed in convenient positions, hence can be adapted to the object and its environment.
The system and its measurements can be calibrated without making use of a reference surface. The method produces the wanted information for a dense cloud of points of the surface.
With this method, it is now possible to produce rapidly in a single go, and on-site, a virtual representation of the height and/or a deformation of the surface using available processing equipment, which was not possible with previous fringe projection systems.
Another aspect of the invention is a processor of a system as set out above, that is programmed to produce a virtual representation of the height and/or a deformation of the surface of a measured object by combining the image-coordinates and the phase information corresponding to the object-coordinates through mathematical functions describing x, y and z, said mathematical functions including system parameters for a given configuration and specification of the measurement system, all as further described herein. The processor is typically a PC with a screen display.
The invention also concerns a computer program which, when loaded in a processor of the system, is operable to produce a virtual representation of the height and/or a deformation of the surface of a measured object by combining the image-coordinates and the phase information corresponding to the image-coordinates through mathematical functions describing x, y and z, said mathematical functions including system parameters for a given configuration and specification of the measurement system, all as further described herein.
The system's computer program(s) can be written in any programming language capable of performing the algorithms based on the given mathematical equations (for solving the set of equations modelling the system, defining thereby the expressions for x,y,z, for calibrating the system and for x, y and z calculation). Examples of suitable programming languages are C++, "Matlab", and Visual Basic. These programs would typically be stored on the hard disk of a PC.
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying schematic drawings, which are given by way of example:
Figure 1 is a diagram showing the principle of fringe projection;
Figure 2 is a perspective view of the layout of a system according to the invention;
Figure 3 is a diagram of a projection head based on a modified Mach-Zehnder interferometer;
Figure 4 shows photographs illustrating the phase shifting procedure, which induces a shift of the fringes and produces a wrapped phase map;
Figure 5 shows the development of a wrapped phase map producing an unwrapped phase map;
Figure 6 is a diagram describing the classical approach for producing a virtual representation of a surface;
Figure 7 is a diagram of the theory underlying the invention, showing the fringe projection and the central projection (modelling the "image acquisition" by the camera);
Figure 8 is a diagram illustrating the modelization of the measurement system in terms of «virtual» and «real » worlds;
Figure 9 is a diagram illustrating different calibration procedures necessary to determine the measurement system parameters;
Figure 10 is a diagram illustrating the raw-data pre-treatment;
Figure 11 is a block diagram of the different steps for calculating the object shape (object-coordinates of the measured point);
Figure 12 is a diagram illustrating an evaluation procedure of the method of the invention;
Figure 13 shows an example of shape measurement, namely an image of a windsurf board; and
Figure 14 is a diagram illustrating the deformation measurement and the improved flexibility of the system in terms of measurement configuration.
DETAILED DESCRIPTION
Figure 1 schematically shows the basic principle of fringe projection. Under its classical form, this technique consists in projecting rectilinear and equispaced fringes 30 (i.e. separated by a constant period p) on an object 10 in one direction and observing the scene from another direction, for instance with a CCD camera. The displacement of the fringes 30 distorted by the object 10 contains the wanted shape information. More particularly, considering the minima or maxima of the light intensity, the height z (h) is proportional to the local change in the period of the fringe. As discussed above, it can be written as: z = h = (pp - pa)/tanα (i) where pp is the period of the fringes as seen when projected on a plane; pa is the period of the fringes as seen when projected on the object; α is the angle between the projection and observation directions.
Figure 2 schematically shows the set-up of a system according to the invention for measuring the surface height and/or deformation of the surface of an object 10 by fringe projection, the points on the object 10's surface being defined in terms of x, y and z coordinates. A projection head 20 (hereinafter "projector") projects a divergent beam of light 21 onto the surface of object 10 at a given angle to produce a periodic pattern of fringes 30 on object 10. A CCD camera 40 is arranged to view the object 10 from another angle and input images of the reflected light of the fringes. These images are projected on the CCD camera's 2-D pixel array.
As explained below in conjunction with Figs. 4 and 5, phase information is extracted from the images of the fringes 30, to produce a discontinuous wrapped phase map 35. The latter is transformed in a continuous unwrapped phase map (optical print) 36 of the object containing phase information.
A theodolite 60 is provided for establishing system parameters related to the system specifications and configuration (i.e. the position in space of the projector 20 and the CCD camera 40 relative to the object 10), for any given spatial relationship of the projector, camera and object. For this purpose, a series of calibration points 11 are disposed on the object 10's surface, and the coordinates of these points are measured by the theodolite 60 along with the coordinates of projector 20 and camera 40. Other means than a theodolite can be used. A processor 50, connected to the CCD camera 40 and to the projector 20, is arranged to produce (after raw data pre-treatment, calibration procedures and object-coordinate calculations - see Figure 11) a virtual representation 15 of the height and/or a deformation of the object 10's surface by combining the image-coordinates (ξ, η) and the phase information φ(ξ, η) corresponding to the object-coordinates through mathematical functions describing x, y and z, as described below in conjunction with Figs. 7 and 8. These mathematical functions include system parameters related to the system specifications and configuration. The processor is for example a suitably programmed PC whose screen can be used to display the representation of the measured object's surface.
The projection head 20 is mounted on a free-standing adjustable support allowing fine tuning of the head's position. The camera 40 and theodolite 60 are mounted on freestanding adjustable supports, such as tripods 41, 61, which can freely be placed on the ground at selected positions and adjusted in height. The camera 40 can thus be positioned
relative to any given object, in any convenient configuration for taking the measurement. The various elements 20,40,60 of the system are hence independent and can be installed according to both the object 10 and its environment.
A preferred form of projection head (projector 20) is shown in Fig. 3. The basic requirements for the projection head are that it should be: sturdy, so it can easily be transported without having to make adjustments each time; compact; simple and fast to adjust; capable of easy phase shifting; capable of easy variation of the fringe pitch (precision), in order to adapt it easily to the object 10 to be measured and to the incidence angle; and subject to minimum influence of dust and aberrations. The projector 20 shown in Fig. 3 projects two substantially superposed beams of coherent light 21,21'. It comprises a beam splitter 22 by means of which an incident laser beam L is split in two and then reflected by mirrors 24,26. Initially, one beam "goes through" the beam splitter 22, the other one is reflected by the beam-splitter. Finally, the two beams are passed through a lens 28 (e.g. a microscope objective) to produce the divergent beams 21,21' from point sources S1,S2. By displacing mirror 26 (a piezoelectric mirror arranged for producing the phase shift), the phase of one of the divergent beams 21,21' is shifted step by step. Consequently, this produces a shift, step by step, of the fringes projected on the object. For each position of the fringes (each step), an image is acquired by the CCD camera 40. The resulting images are shown in Fig. 4 at 30a, 30b and 30c. Fig. 4 shows by way of example the principle of the phase-shifting method, which consists in moving the fringes step by step on the whole surface, by increasing the phase by a known amount, while at the same time acquiring images of these fringes for each step.
For a sequence of three images, the intensity I at any point (x, y, z) can be written:
I1(x,y,z) = I0 + IM·cos(φ(x,y,z))
I2(x,y,z) = I0 + IM·cos(φ(x,y,z) + 2π/3)
I3(x,y,z) = I0 + IM·cos(φ(x,y,z) + 4π/3)
From these three different values of the intensity I registered at each point of the images, the value of the phase φ(x,y,z) at any point (x, y, z) can be computed:
φ(x,y,z) = f(I1,I2,I3) = arctan[√3·(I3 - I2) / (2I1 - I3 - I2)]
By doing the same for all the points on the images, we obtain the so-called "wrapped" phase map 35. In this intermediate optical print, the phase is discontinuous and known "modulo 2π". At least three images are necessary to perform the phase shifting procedure, and other algorithms using a greater number of images also exist.
Fig. 5 illustrates an additional operation called "phase-unwrapping" that is performed next. It consists in developing the "wrapped" phase 35 obtained above to give a continuous "unwrapped" phase map 36. This optical print of the object 10 is an image in grey levels where each pixel (i, j) contains the phase information φ. The phase map 36 contains the signature of the measured object shape, and hence, the needed information.
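A one-dimensional sketch of the unwrapping idea follows (real phase maps are 2-D images and require 2-D unwrapping algorithms, but the principle of removing the 2π jumps is the same):

```python
import numpy as np

# A smooth "true" phase ramp, wrapped into (-pi, pi], is recovered by
# detecting and removing the 2*pi jumps between neighbouring samples.
# np.unwrap does exactly this along one dimension.
phase_true = np.linspace(0.0, 20.0, 200)           # continuous, unwrapped phase
phase_wrapped = np.angle(np.exp(1j * phase_true))  # discontinuous, known modulo 2*pi
phase_unwrapped = np.unwrap(phase_wrapped)         # 2*pi jumps removed
```

The recovery works because the phase changes by less than π between neighbouring samples; an unwrapped map is in general only determined up to an additive multiple of 2π (here the constant happens to be zero because the ramp starts at 0).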
Fig. 6 illustrates the way of getting the height information from phase maps when the classical approach (e.g. in the case of small objects) is used. The extraction of the shape information from this optical print is quite simple. Indeed, the height is proportional to the difference between the phase map of the measured object and the phase map of a reference plane surface. More precisely, the object 10 is measured by fringe projection, phase- shifting and unwrapping, giving respectively a wrapped phase map 35 and an unwrapped phase map 36. Similarly, the measurement of a reference plane surface R10 gives an unwrapped phase map R35 and R36 of the reference surface. Then, basically, the wanted information is obtained by subtracting the two unwrapped phase maps 36 and R36 one from another, producing a height map 15.
In the case of larger objects, this simple approach can no longer be used. Indeed, for dealing with large objects, the illuminating beams must be divergent. In addition, the depth of field suggests the use of interferometrically-created fringes. Hence, the relation between the phase and the height is not proportional, and another approach must be used. Figure 7 is a diagram illustrating the system and method of the invention based on a general approach. The different elements in presence are the two point sources S1 and S2, the object 10, and the CCD camera 40. Any point P of the object surface is designated by its object-coordinates (x,y,z).
Fringes of light produced by the two point sources S1,S2 of coherent light are projected on the object surface (operation P1).
The object 10 and the fringes 30 projected on its surface are imaged by the camera 40 (operation P2). According to the central perspective equation, a given point P(x, y, z) of
the surface is projected on the imaging plane of the camera ( e.g. a CCD pixel array) at a corresponding point (ξ, η). ξ and η are called the image-coordinates of point P.
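The central perspective mapping can be sketched with the standard photogrammetric collinearity equations. This is a generic camera model; the exact sign conventions and parametrisation used by the system may differ:

```python
import numpy as np

def central_projection(p, x0, rot, c, xi0=0.0, eta0=0.0):
    """Project object point p = (x, y, z) to image coordinates (xi, eta).

    x0 is the perspective centre, rot the 3x3 rotation matrix giving the
    orientation of the imaging plane, c the principal distance, and
    (xi0, eta0) the principal point.  A sketch of the standard collinearity
    model, not the patent's exact formulation.
    """
    u, v, w = rot @ (np.asarray(p, dtype=float) - np.asarray(x0, dtype=float))
    return xi0 - c * u / w, eta0 - c * v / w

# Toy check: camera at the origin looking along +z; a point on the optical
# axis projects to the principal point.
xi, eta = central_projection([0.0, 0.0, 5.0], [0.0, 0.0, 0.0], np.eye(3), c=0.05)
```

In the invention these two equations are solved together with the phase equation, so that the measured pair (ξ, η) plus the phase pins down all three object-coordinates.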
When projecting the fringes 30 of light on the object 10 by operation P1, we add information, namely the phase φ(x, y, z), at each point of the object surface. Indeed, since the fringes are produced by interferometry, their intensity observed at any point (x, y, z) of the object surface is related to the phase φ(x, y, z) by equation (ii). This equation shows that it is the phase that contains the wanted height information. Actually, according to the laws of optical interferometry, the phase is physically related to the position of each point P(x, y, z), since it is proportional to the difference of the distances between the point P(x, y, z) and the two point sources S1, S2 (see equation (iii)).
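The proportionality of the phase to the path difference towards the two sources can be sketched directly. The function below is a minimal illustration of equation (iii) as described in the text, assuming the standard two-source interference relation φ = (2π/λ)(d1 − d2); the names are hypothetical, not the patent's.

```python
import math

def interference_phase(p, s1, s2, wavelength):
    """Phase of the interferometrically-created fringes at object
    point p, proportional to the path difference |p-s1| - |p-s2|
    towards the two point sources (equation (iii)); a sketch
    assuming the standard two-source interference relation."""
    d1 = math.dist(p, s1)  # distance from P to source S1
    d2 = math.dist(p, s2)  # distance from P to source S2
    return 2.0 * math.pi * (d1 - d2) / wavelength

# A point equidistant from both sources lies on the zero-phase fringe.
phi0 = interference_phase((0.0, 0.0, 5.0), (-1e-4, 0.0, 0.0),
                          (1e-4, 0.0, 0.0), wavelength=532e-9)
```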
Thus, after applying the operation P1, we can consider a "4D" space where each point P is coded by its object-coordinates and the phase: P(x, y, z, φ(x, y, z)). The phase is determined, through phase-shifting and phase-unwrapping procedures, for each point P of the surface, and when imaged by the camera 40, this point P is projected at its corresponding point (ξ, η, φ(ξ, η)) of the imaging plane (e.g. CCD pixel array). It can be considered as part of an image in "3D" space, where each pixel corresponding to the image-coordinates (ξ, η) contains a phase value φ(ξ, η). The phase φ(ξ, η) is related to the phase φ(x, y, z) by: φ(ξ, η) = C·φ(x, y, z), where C is a constant depending on the camera specifications. Thus, if (ξ, η, φ(ξ, η)) are known (by measurement and by reading a given pixel of the CCD array), and if the operations P1 and P2 are fully defined for a given measurement configuration and the specifications of the system, then the object-coordinates (x, y, z) can be determined for each corresponding point. Hence, the object shape is known. This is what we do by using a model of the measurement system as illustrated by Figure 8. Figure 8 describes the modelization of the measurement system, and the procedures to recover the object-coordinates (x, y, z) of each measured point. The model in this approach is based on two levels: the "real" (physical) world, where the system and the object exist and where the measurement takes place, and a "virtual" world. In the latter, the system, the object and the measurement are described by mathematical concepts. In particular, the measurement system is represented by a set of mathematical equations 100 corresponding to equation (iv) (see below). One equation describes the phase φ of the light and thus corresponds to the fringe projection (P1); since here the fringes are created by interferometry, we call it the interferometry equation. The two other equations correspond to the camera (P2), and are called the equations of central projection.
According to the laws of optical interferometry, the phase φ(x, y, z) of the light at any point P(x, y, z) of the surface is proportional to the difference of the distances between that point and the two point sources. It is given by equation (iii). If the distance between the two point sources is several orders of magnitude smaller than the distance between the sources and the object, the interferometry equation can be simplified, as shown in the article "Shape Measurement on Large Surfaces by Fringe Projection", Lehmann et al., Experimental Techniques, 1999, vol. 23, no. 2, pp. 31-35. Thus the whole system can be described by the following set of equations:
In these equations:
- φ(x, y, z) is the phase at point P(x, y, z);
- Rx, Ry, Rz, rx, ry, rz are parameters describing the projection head 20;
- λ is the laser wavelength;
- ξ, η are the image-coordinates corresponding to point P(x, y, z);
- ξ0, η0 are the image-coordinates of the principal point;
- c is the principal distance of the central projection modeling the camera 40;
- X0, Y0, Z0 are the object-coordinates of the perspective center; and
- rkl are the elements of the rotation matrix describing the orientation of the imaging plane with regard to the object-coordinates referential; they depend on the three rotation angles ω, K, φ (k, l = 1 to 3).
Here, Rx, Ry, Rz, rx, ry, rz are called the interferometric parameters; ξ0, η0 and c are the internal orientation parameters of the camera 40; X0, Y0, Z0 and ω, K, φ are the external orientation parameters of the camera 40. Furthermore:
- x, y and z are the wanted object-coordinates;
- ξ, η are the image-coordinates corresponding to point P(x, y, z); and
- φ(x, y, z) is the phase at point P; ξ, η and φ are determined by reading each pixel of the CCD camera's 2D array.
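The set of equations 100 (equation (iv)) is not reproduced in the text; a plausible form, consistent with the parameter list above, with two-source interferometry and with the standard collinearity equations of photogrammetric central projection, is sketched below. The identification of (Rx, Ry, Rz) and (rx, ry, rz) with the coordinates of the two point sources, and the sign and rotation-matrix conventions, are assumptions, not the patent's exact formulation.

```latex
% Interferometry equation (P1): phase as path difference between the
% two point sources, here assumed at (R_x,R_y,R_z) and (r_x,r_y,r_z)
\varphi(x,y,z) = \frac{2\pi}{\lambda}\left(
  \sqrt{(x-R_x)^2+(y-R_y)^2+(z-R_z)^2}
 -\sqrt{(x-r_x)^2+(y-r_y)^2+(z-r_z)^2}\right)

% Equations of central projection (P2): standard collinearity form
\xi  = \xi_0  - c\,\frac{r_{11}(x-X_0)+r_{12}(y-Y_0)+r_{13}(z-Z_0)}
                        {r_{31}(x-X_0)+r_{32}(y-Y_0)+r_{33}(z-Z_0)}
\eta = \eta_0 - c\,\frac{r_{21}(x-X_0)+r_{22}(y-Y_0)+r_{23}(z-Z_0)}
                        {r_{31}(x-X_0)+r_{32}(y-Y_0)+r_{33}(z-Z_0)}
```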
This system of non-linear equations is solved, giving general expressions 110 of x, y and z, as follows:
x = x(ξ, η, φ; Rx, Ry, Rz, rx, ry, rz, λ, ξ0, η0, c, X0, Y0, Z0, ω, K, φ)
y = y(ξ, η, φ; Rx, Ry, Rz, rx, ry, rz, λ, ξ0, η0, c, X0, Y0, Z0, ω, K, φ)
z = z(ξ, η, φ; Rx, Ry, Rz, rx, ry, rz, λ, ξ0, η0, c, X0, Y0, Z0, ω, K, φ)
Calibration procedures 120 (described in more detail with reference to Fig. 9) give the values of the system parameters: Rx, Ry, Rz, rx, ry, rz, λ, ξ0, η0, c, X0, Y0, Z0, ω, K, φ. For this purpose, information 70 (system configuration and specifications) and 80 (phase, object- and image-coordinates) about the calibration points 11 is used. Thus, expressions 130 of x, y and z specific to the system configuration and specifications 70 (wavelength of the light, camera specifications, etc.) are obtained:
As described above, a measurement is carried out through phase-shifting and phase-unwrapping procedures giving a phase-map 36. This optical print of the object 10 is an image in grey levels where each pixel (i, j) contains the phase information φ. Furthermore, each pixel (i, j) is related to the image-coordinates (ξ, η). Thus, reading a pixel (i, j) gives the information (ξ, η, φ). This allows the object-coordinates (x, y, z) of each point of the measured surface to be calculated (step 40), by using the set of equations (v). Once this is done for all the points of the object surface, a height map 15 corresponding to the wanted shape information can be represented using an adequate program.
Figure 9 is a diagram illustrating the calibration 120 of the system. More precisely, it describes measurements and procedures used to determine the parameters of the measurement system.
The interferometric parameters 150 are Rx, Ry, Rz and rx, ry, rz. Rx, Ry, Rz are measured using the theodolite 60. rx, ry, rz are determined using calibration points for which the phase and object-coordinates are known, and least-squares calculations minimizing the difference between the measured phase values and the theoretical phase values given by the interferometry equation.
The wavelength λ is given by the laser specifications.
The external orientation parameters 160 are determined using calibration points 11, for which the image- and object-coordinates are known, and least-squares calculations applied to the equations of central projection, after linearization. Approximate values of X0, Y0, Z0, ω, K, φ are entered in the program, and then optimised by iterations.
The internal orientation parameters 170 (ξ0, η0, c) are determined by measuring a calibration object from different points of view with the camera 40 and using bundle adjustment calculations [Kraus K., Waldhäusl P., Manuel de Photogrammétrie: principes fondamentaux, Hermes, 1998]. Commercially available programs exist for this procedure, for example Photomodeler™ available from EOS Systems.
Here the interferometric, external and internal orientation parameters 150, 160, 170 are determined separately using programs based on known least squares calculations. However, other approaches are also possible. For example, it is possible to determine the external and internal orientation parameters at the same time by using appropriate algorithms. In addition, at this stage, lens distortions could be determined and corrected if judged necessary (they have been omitted in the development above, for simplification).
Figure 10 illustrates the different procedures necessary to transform the raw data into adequate data, expressed in the right units and referentials, and corresponding to the mathematical equations developed. In particular, the phase measured directly from the phase map is expressed in grey levels and must be transformed into radians to suit the mathematical formula describing the phase. A linear relation between the phase expressed in grey levels and the phase expressed in radians allows this. This is shown in Figure 10(a). Similarly, the coordinates of the calibration points 11 must be transformed from the measurement referential (e.g. the theodolite referential) to the right object-coordinates referential, related to the central projection model, as illustrated by Figure 10(b). The same is true for the pixel positions (i, j) of the calibration points 11, initially expressed in integers (pixels). They must be expressed in units of length (e.g. mm), and transformed to suit the right image-coordinates referential (ξ, η) defined by the central projection model. This is represented in Figure 10(c).
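Two of the pre-treatment transforms above can be sketched as follows. The calibration constants (gain, offset, pixel pitch, principal-point indices) are hypothetical placeholders that would be determined for a real system; the axis orientation in the pixel-to-image-coordinate conversion is also an assumption.

```python
import math

def grey_to_radians(grey, gain, offset):
    """Fig. 10(a): linear map from phase in grey levels to phase in
    radians; gain and offset are hypothetical calibration constants."""
    return gain * grey + offset

def pixel_to_image_coords(i, j, pixel_size_mm, i0, j0):
    """Fig. 10(c): pixel indices (i, j) to metric image-coordinates
    (xi, eta) in mm, with origin at the principal point (i0, j0).
    The axis orientation (row index growing downward) is assumed."""
    xi = (j - j0) * pixel_size_mm
    eta = (i0 - i) * pixel_size_mm
    return xi, eta

# e.g. an 8-bit grey scale spanning one 2*pi fringe:
phi = grey_to_radians(128, gain=2 * math.pi / 255, offset=0.0)
```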
Figure 11 sums up the different procedures necessary before the object-coordinates (x, y, z) of each measured point can be calculated. As illustrated, the data acquisition (a) involves on the one hand fringe projection (by the projection head 20), image acquisition by the CCD camera 40, the phase-shifting procedure (see Fig. 4 for details), and measurements using the theodolite 60 (positions of the calibration points, of the camera and of one of the two point sources). On the other hand, it involves a program (e.g. Photomodeler™) for measuring and determining data relative to the specifications of the CCD camera 40, and the necessary user interfaces for performing all the measurements. Most of the procedures carried out by the user can be automated using appropriate programs.
The data acquisition (a) leads to the production of raw data as illustrated in Fig. 11(b), namely: the wrapped phase map 35, the positions ("raw object-coordinates") of the calibration points 11, of the perspective center and of one of the light sources S1, S2, the internal orientation parameters (ξ0, η0, c), the image size (in millimetres and in pixels), the pixel positions (i, j) of the calibration points on images of the measured object, and a first approximation of the angles of the rotation matrix (ω, K, φ).
The raw data is supplied as illustrated in Fig. 11(c) to programs (in processor 50) for data pre-treatment. These programs carry out the phase unwrapping procedure 35/36 and the object-and image- coordinates transforms. The different programs can be gathered together in one master program.
This produces treated data as indicated in Fig. 11(d), namely the unwrapped phase map 36, the object- and image-coordinates ((x, y, z) and (ξ, η) respectively) of the calibration points 11, a first approximation of the perspective center (X0, Y0, Z0), and the coordinates (Rx, Ry, Rz) of one of the point sources S1, S2. In the described examples of the invention, the internal orientation parameters (ξ0, η0, c) and the external orientation parameters (ω, K, φ) are used as such (i.e. as determined in step (a), without additional transformation).
As illustrated in Fig. 11(e), the treated data is processed by calibration programs determining the missing system parameters, namely: the interferometric parameters (rx, ry, rz) and optimised values of the external orientation parameters (X0, Y0, Z0, ω, K, φ). At this stage, Fig. 11(f), all the system parameters are known: the interferometric parameters Rx, Ry, Rz, rx, ry, rz, λ, and the internal and external orientation parameters of the camera: ξ0, η0, c, X0, Y0, Z0, ω, K, φ.
Lastly, as indicated at Fig. 11(g) and 11(h), these parameters are used in programs based on the set of equations (v) to determine the object-coordinates (x,y,z) of all the measured points of the object 10.
Figure 12 illustrates an evaluation procedure used to assess the newly developed method according to the invention. Retroreflective targets are fixed on the object 10 and are considered as the calibration points 11. Their object-coordinates x, y and z are measured using the theodolite 60 and serve as reference values. Then they are determined using the method according to the invention and evaluated at these calibration points. For the purpose of evaluation, we consider the divergence ("errors") between the coordinates measured with the theodolite ("reference") and the coordinates computed using the inventive method. These errors are represented in Figure 12 and are defined as follows:
• the "z-error" or height-error: dz = z_method − z_theodolite;
• the in-plane error: dxy = √((x_method − x_theodolite)² + (y_method − y_theodolite)²); and
• the total error: d = √(dz² + dxy²).
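The evaluation errors can be computed directly from a pair of coordinate triples. In the sketch below, the height and in-plane components follow the definitions in the text; the total error is assumed to combine the two components quadratically, which is consistent with the values d, dz and dxy reported together in the Tables but is not spelled out in the text.

```python
import math

def evaluation_errors(p_method, p_theodolite):
    """Errors of Figure 12 between the coordinates (x, y, z) computed
    by the method and those measured by the theodolite (reference).
    The quadratic combination for the total error is an assumption."""
    dx = p_method[0] - p_theodolite[0]
    dy = p_method[1] - p_theodolite[1]
    dz = p_method[2] - p_theodolite[2]          # height-error
    dxy = math.hypot(dx, dy)                    # in-plane error
    d = math.sqrt(dz * dz + dxy * dxy)          # total error (assumed)
    return dz, dxy, d

dz, dxy, d = evaluation_errors((3.0, 4.0, 1.0), (0.0, 0.0, 0.0))
```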
The invention is further described in the following Examples.
Examples A 5 W Nd:Vanadate laser yielding green light (λ = 532 nm) was used as the source of light. The projector 20 and the CCD camera 40 (752 x 582 pixels, 8 bits, CCIR) were placed in front of the object 10, as illustrated in Figure 2, making a non-zero angle between the observation direction (camera axis) and the projection direction. The spacing between S1 and S2 is about 200 microns (typical values of this spacing range within a few hundred microns). The distances between, on the one hand, the object 10 and the projection head 20, and, on the other hand, the object 10 and the camera 40, are typically of the order of several meters.
The phase-map of the object under investigation is registered according to the phase-shifting and phase-unwrapping procedures described with reference to Figures 4 and 5.
The positions of the projector 20 (Rx, Ry, Rz), of the camera 40 (first approximation of X0, Y0 and Z0) and of several points 11 (retro-reflective targets fixed on the object) on the object's surface are measured with a distance-meter theodolite 60. Once aimed at a point, it is able to register the distance to this point as well as the angles of measurement. From these data it computes the raw object-coordinates of the measured point. This simplifies the measurement, compared to working with two theodolites by triangulation intersection. A first approximation of the precision is given by the accuracy of the theodolite 60 as specified by the manufacturer, which is ±1 mm (for a distance from 1 to 100 m). For these measurements, the vertical axis of the object-coordinates system is given by the theodolite's vertical axis (y-axis). The other axes (x- and z-axis) are determined using a reference point to fix the theodolite's orientation. The origin is arbitrarily fixed at any given point (more or less at the center of the imaged object 10), and the orientation of the referential is set once and for all.
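The conversion a distance-meter theodolite performs from a distance-plus-two-angles reading to raw Cartesian object-coordinates can be sketched as follows. The angle conventions (azimuth measured from the z-axis, elevation from the horizontal plane) are assumptions for illustration, not the instrument's actual output format; the text only fixes the vertical axis as the y-axis.

```python
import math

def theodolite_to_cartesian(distance, azimuth, elevation):
    """Convert a theodolite reading (distance in m, angles in radians)
    into raw Cartesian object-coordinates, with the theodolite's
    vertical axis as the y-axis as in the text. Angle conventions
    are assumed: azimuth from the z-axis, elevation from horizontal."""
    horizontal = distance * math.cos(elevation)  # projection on x-z plane
    x = horizontal * math.sin(azimuth)
    z = horizontal * math.cos(azimuth)
    y = distance * math.sin(elevation)
    return x, y, z

# A point aimed straight ahead at 10 m, level with the instrument:
p = theodolite_to_cartesian(10.0, 0.0, 0.0)
```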
The internal orientation parameters of the CCD camera 40 can be determined before or after these measurements, provided that the specifications of the camera are kept unchanged. A first approximation of the camera's rotation matrix angles (external orientation parameters ω, K, φ) is provided by the user.
The phase map measurement, the theodolite measurements and the internal orientation parameters determination can be done in any order. Then, the image-coordinates of the calibration points are read from the positions of the retro-reflective targets on the images of the object 10 under investigation. For image measurement, the pictures are displayed on a computer screen. A simple cursor serves for the measurement of the image-coordinates of the calibration points, with a precision of ±1 pixel. All these raw data (some as such and others after pre-treatment) are used to calibrate the measurement system, and then to compute the object-coordinates x, y and z of all the points on the surface. Finally, a program displays the 3-D height map (shape) of the object under investigation. All these procedures are depicted in Figures 8 to 11.
Several objects 10 were measured as set out below.
Windsurf Board A windsurf board (3.2 x 1 m) was fixed vertically on a wall. The imaging plane of the camera 40 was placed roughly parallel to the object 10, at a distance of about 3 m. The projection head 20 was positioned 6 m away on the right side, making a projection angle of about 60° with regard to the observation direction. Eight retro-reflective targets 11 were fixed on the windsurf board's surface to define the calibration points, and their positions were measured with the theodolite 60. Then a fringe projection measurement was performed in order to capture the windsurf board's phase-map 35. Knowing the physical and pixel coordinates of the reference points 11, as well as the phase at their positions, allows us to calibrate the system parameters. In this example, the internal parameters determination was carried out before the other measurements.
Figure 13 shows by way of example a representation of the shape measurements of the windsurf board obtained by the process according to the invention, as can be displayed on a conventional PC screen.
Table I presents the resulting values computed for the calibration points (P1 to P8) according to the evaluation procedure described with reference to Figure 12: the object-coordinates as measured with the theodolite (x_theod., y_theod., z_theod.) and as determined by the inventive method (x_meas., y_meas., z_meas.), the total error d, the height-error dz and the in-plane error dxy. As can be seen, the total errors remain within a few millimetres, mainly due to in-plane errors. The height errors are within 1/1,000 of the largest object size, which corresponds to the known rule-of-thumb limit for fringe projection techniques.
Basically, the main sources of errors are the three input variables, namely the image-coordinates (ξ, η) and the phase φ. In addition, preliminary studies showed that in-plane errors are more affected by errors on the image-coordinates, and height errors by errors on the phase (note that since the value of the phase depends on the image-coordinates (ξ, η), or equivalently on the pixel position (i, j), an error on the image-coordinates generates an error on the phase). Hence, to decrease all the errors, the lens distortions should be computed and corrected, and a camera with more pixels should be used in order to reach a better spatial resolution (here, about 3.5 mm/pixel). Finally, since phase-maps can be acquired within a few seconds, several of them should be measured, filtered and averaged to decrease local errors on the phase.
TABLE I
Out-of-plane deformation measurement of a beach umbrella A generally rectangular beach umbrella was placed vertically and its position kept fixed for all the following measurements. The positions of the projection head 20 and of the camera 40 were also kept fixed. For all the measurements, the projection head 20 was 6 m away on the right, and the camera 40 was positioned slightly to the right of the umbrella, making an angle slightly larger than 90° with the projection direction. The measurements were done for different configurations of the beach umbrella. In its initial position it is fully open (its size is about 2 x 1.5 m); then, for each following configuration, it was closed in steps, by sliding the supporting structure along the shaft by about 30 cm between two measurements. Twelve calibration points 11 formed by retro-reflective targets were fixed on the umbrella and their positions were measured with the theodolite 60. Then a fringe projection measurement was performed in order to capture the beach umbrella's phase-map 35, for each configuration. Knowing the physical and pixel coordinates of the reference points, as well as the phase values at their positions, allows us to calibrate the system parameters.
Tables II and III below show the results for the calibration points of the beach umbrella in two configurations: the first fully open (Table II) and the second slightly closed (by 30 cm; Table III). The total errors remain generally within a few millimetres. However, they are higher in Table III, mostly due to higher height errors dz.
As said in the Example above, the in-plane errors dxy are due to the low spatial resolution of the camera, to reading errors arising while manually determining the pixel positions of the calibration points, and to non-corrected lens distortions. The higher height errors are due to local errors in the phase map (indeed, as said in the Example above, the height error at a point depends mainly on the phase value).
A solution to decrease these height errors dz would be to acquire several phase maps of the object (a fast procedure), and to filter and average them. The in-plane errors dxy could be lowered by using a camera with more pixels to reach a better spatial resolution (here, about 3.5 mm/pixel), by computing and correcting the lens distortions, and by automating the target location (i.e. the reading of the positions of the calibration points on the image of the object 10).
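The averaging of several phase maps suggested above can be sketched as a simple per-pixel mean. This is a minimal sketch assuming the maps are equally sized nested lists of floats acquired of the same static scene; the filtering step mentioned in the text is omitted.

```python
def average_phase_maps(phase_maps):
    """Average several phase maps of the same static scene to reduce
    local phase noise (and hence the height errors dz); assumes all
    maps have identical dimensions."""
    n = len(phase_maps)
    rows = len(phase_maps[0])
    cols = len(phase_maps[0][0])
    return [[sum(pm[r][c] for pm in phase_maps) / n
             for c in range(cols)]
            for r in range(rows)]

# Two 1x2 maps averaged pixel by pixel:
avg = average_phase_maps([[[1.0, 2.0]], [[3.0, 4.0]]])
```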
These results (Tables II and III) show for the first time the feasibility of the measurement by fringe projection of large out-of-plane deformation on large objects.
TABLE II
TABLE III
Measurement in any configuration Figure 14 illustrates the different configurations adopted to measure an aluminium plate (2 m x 1.5 m). For these measurements, the plate 10 and the projection head 20 were kept at the same position and the camera 40 was placed in three different positions designated Configurations 0, 2 and 4 respectively. After measurement, the resulting values for the calibration points are presented in Tables IV, V and VI for the measurements performed in Configurations 0, 2 and 4 respectively (see Figure 14). These results show for the first time the feasibility of the measurement in different configurations and the improved flexibility of the system.
TABLE IV
The explanations for the errors d, dz and dxy, and how to decrease them, are the same as described for the previous Examples.
These measurements show the flexibility of the system of the invention, allowing its configuration to be adapted to the object 10 under investigation and to the site, and allowing deformations of the object to be measured (Tables II and III). Immobility of the object 10 (e.g. the umbrella) during the measurement is required. Using a laboratory installation, the theodolite and fringe projection measurements can be carried out in well under 15 minutes, depending on the degree of automation. The whole procedure produces height information for about 500,000 points or more at the same time.
Table VII below compares the method of the invention (labelled "Invention") with the prior-art method (labelled "LSI") described in ["Shape Measurement on Large Surfaces by Fringe Projection", Lehmann et al., Experimental Techniques, 1999, vol. 23, no. 2, pp. 31-35]. The results show the total error, the in-plane error and the height error averaged over all the calibration points of five different objects measured in exactly the same configuration, with the same materials and in the same object-coordinates referential. With the method of the invention, the total error is approximately divided by 10, the height error by 40 and the in-plane error by 2.
TABLE VII
All the above results show that the method of the invention works for objects of different shapes and types of surface (provided that the surface is diffusive, i.e. not specularly reflective). Furthermore, it is possible to adapt the system configuration to the object under investigation and to the measurement site, to a large extent (Tables IV-VI). In addition, it is now possible to measure large out-of-plane deformations (Tables II-III). No such measurements were possible with prior-art fringe projection systems and methods. Finally, in most cases, the precision is better than for prior-art methods, as demonstrated by Table VII. It can be seen from the foregoing that the described system and method according to the invention allow the fringe projection method to be extended to the measurement of large objects that have any shape, using any measurement configuration. More precisely, they allow:
- Measurement of the shape of large objects (any general shape, and not only generally flat objects);
- Measurement of the deformation of large objects;
- A greater flexibility of the measurement system (possibility to adapt the measurement system to the object under investigation, and to do the measurement in any configuration);
- Obtaining the height information at a great number of points (500,000 and above) at the same time;
- According to a rule of thumb in the field of fringe projection, attainment of a precision of about 1/1,000 to 1/10,000 of the largest object size; and
- Potentially fast measurement: the resulting shape is obtainable in a few minutes with adequate automation.
In principle there is no upper limitation on the size of the objects to be measured by the method of the invention. It is just a matter of choosing an adequate source of light with enough power to illuminate the whole of the object's surface to be measured, and a camera whose 2-D pixel array has a sufficient size to provide an adequate spatial resolution. In addition, the object's environment should allow enough space to take an image of the whole illuminated object's surface in one shot.
Many modifications of the described embodiments of the system and method are possible and the system/method can be used for many applications, other than those described.
In a variation, the system's processor 50 can be connected to the projection head 20 and camera 40 via a telecommunications network, for example the Internet whereby the measurement of an object can be taken on site and the results processed at a remote location. Alternatively, measurements can be recorded on a storage medium such as a standard CD that can be inserted in a PC at any location.