Structured illumination is a common approach to optical surface measurement. If a light source of known radiance illuminates a surface under test, the light scattered toward a detector carries information about that surface, albeit encoded. With a well-designed illumination configuration and decoding scheme, surface slope or depth can be extracted from a sequence of structured illuminations. Measurands such as parallax motion, raw intensity (scaled irradiance at the detector focal plane), or polarization ellipticity and degree of polarization encode this surface geometry. Broadly, these methods include photogrammetry, interferometry, and deflectometry.
One variation of structured illumination is phase-shifting, which encodes phase across the source. Phase is the cyclic position of a phenomenon within its repeating motif. In our context, the light source exhibits a sinusoidal distribution that varies from a minimum to a maximum and back to its minimum, repeating this motif across the source's full spatial extent. For a sinusoidal fringe displayed across a planar illumination screen in a single direction, a single period spans a spatial distance $p$ and corresponds to $2\pi$ radians of phase, so the phase at screen coordinate $x$ is $\phi(x) = 2\pi x / p$.
In this scenario, a temporal sequence of acquisitions in which the phase of the sinusoidal pattern is shifted can be used to obtain a wrapped phase. With a fixed phase-shift step size, say four steps of $\pi/2$, the wrapped phase follows from a simple arctangent of intensity differences.
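Writing the four captured intensities out explicitly makes this concrete (I use $I_1, \dots, I_4$ as labels for the four frames, with bias $A$ and modulation $B$):

$$I_1 = A + B\cos\phi, \qquad I_2 = A - B\sin\phi, \qquad I_3 = A - B\cos\phi, \qquad I_4 = A + B\sin\phi,$$

so the unknown bias and modulation cancel in

$$\phi = \arctan\!\left(\frac{I_4 - I_2}{I_1 - I_3}\right),$$

which is recovered only modulo $2\pi$, hence a 'wrapped' phase.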
Figure 1. Wrapped phase, unwrapped phase, and the typical sinusoidal motif. The phase difference shows the error of the phase reconstruction with no noise, the ideal case.
Truly, the design of phase-shifting algorithms to decode sequences of phase images is fascinatingly complex and stunningly elegant. Much research has been invested in phase-shifting algorithms that decode phase in light of the realities of measurement. This short post seeks to express some of the design elements behind what appears to be a simple arctangent calculation.
For a planar source with a sinusoidal phase pattern in one direction, each pixel possesses the following desired brightness:

$$I_n(x) = A + B\cos\!\left(\frac{2\pi x}{p} + \frac{2\pi n}{N}\right), \qquad n = 0, 1, \dots, N-1,$$

where $A$ is the bias, $B$ the modulation, and $2\pi n / N$ the commanded phase shift of the $n$-th pattern.
If we digitally command a screen pixel to display ‘128’ brightness counts, we would like the radiance of that pixel to be twice the radiance produced at ‘64’ counts. This linear scaling is desirable, and ideal for the calculation of phase.
In PMD, a camera* captures a sequence of images of the UUT while the illumination screen displays a sequence of phase-shifted sinusoidal patterns. From this set of images, the phase at each camera pixel can be calculated with the N-step algorithm:

$$\phi = \arctan\!\left(\frac{-\sum_{n=0}^{N-1} I_n \sin(2\pi n/N)}{\sum_{n=0}^{N-1} I_n \cos(2\pi n/N)}\right).$$
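A minimal NumPy sketch of this calculation (my own implementation; the names are illustrative):

```python
import numpy as np

def n_step_phase(frames):
    """Wrapped phase from N phase-shifted frames.

    frames: array of shape (N, ...), where frame n was captured with the
    sinusoid shifted by 2*pi*n/N.
    """
    frames = np.asarray(frames, dtype=float)
    N = frames.shape[0]
    deltas = 2.0 * np.pi * np.arange(N) / N
    # Project the intensity sequence onto sine and cosine of the shifts;
    # the bias A and modulation B cancel in the ratio.
    s = np.tensordot(np.sin(deltas), frames, axes=1)
    c = np.tensordot(np.cos(deltas), frames, axes=1)
    return np.arctan2(-s, c)  # wrapped to (-pi, pi]
```

In designing such an algorithm, a few considerations arise: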
1. We want at least three steps, since three unknowns ($A$, $B$, and $\phi$) must be solved for at each pixel.
2. Smaller values of $N$ mean fewer frames to capture and a faster acquisition.
3. We want the phase steps to be equally spaced over $2\pi$ and accurately realized by the screen.
In the absence of screen effects, this is what we would optimize for. To resist noise, we usually increase the number of steps or the number of averages for each intensity frame. Other basic accommodations for realistic errors include gamma compensation and averaging.
Increasing the number of averages, we observe an immediate effect: the RMS phase error drops, roughly halving.
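A quick Monte Carlo sketch shows this behavior directly (assuming additive Gaussian intensity noise and reusing n_step_phase from above; all values are arbitrary):

```python
rng = np.random.default_rng(0)

def rms_phase_error(n_steps=4, n_avg=1, sigma=5.0, trials=20_000):
    """RMS wrapped-phase error under additive Gaussian intensity noise."""
    phi = rng.uniform(-np.pi, np.pi, trials)
    deltas = 2.0 * np.pi * np.arange(n_steps)[:, None] / n_steps
    ideal = 128.0 + 100.0 * np.cos(phi + deltas)  # shape (N, trials)
    # Averaging M frames scales the noise standard deviation by 1/sqrt(M).
    noisy = ideal + (sigma / np.sqrt(n_avg)) * rng.standard_normal(ideal.shape)
    residual = np.angle(np.exp(1j * (n_step_phase(noisy) - phi)))
    return np.sqrt(np.mean(residual ** 2))
```

Quadrupling n_avg (or n_steps) halves the returned RMS error, the familiar $1/\sqrt{M}$ scaling.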
With gamma, we can see its influence on the overall phase calculation error. It is horrific, but to be fair, screens usually have a gamma near the nominal 2.2.
At first, it seems we can mitigate gamma by increasing the number of phase steps. However, sweeping the number of phase steps shows that the RMS error reductions are asymptotic. Gamma compensation instead takes the form of finding and calibrating a few coefficients for the non-linear brightness output versus input, then applying the calibrated relationship between screen input and output.
A straightforward implementation of gamma compensation yields a clear improvement in accuracy, along with time savings from needing fewer phase steps.
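For concreteness, here is a minimal sketch of such a calibration, assuming a simple power-law screen model (the model form, initial guesses, and names are my own assumptions, not a prescribed method):

```python
import numpy as np
from scipy.optimize import curve_fit

def screen_model(counts, gain, gamma, offset):
    """Power-law model of measured brightness vs. commanded counts."""
    return gain * (counts / 255.0) ** gamma + offset

def fit_gamma(commanded, measured):
    """Calibrate (gain, gamma, offset) from a measured brightness sweep."""
    popt, _ = curve_fit(screen_model, commanded, measured, p0=(1.0, 2.2, 0.0))
    return popt

def precompensate(counts, gamma):
    """Pre-distort commanded counts so displayed brightness is linear."""
    return 255.0 * (np.asarray(counts) / 255.0) ** (1.0 / gamma)
```

Displaying precompensate(pattern, gamma) rather than the raw pattern restores the linear-in-counts sinusoid that the phase calculation assumes.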
This section details some of the beautiful Fourier theory behind the scenes of a phase calculation algorithm. I was specifically compelled to write this after reading Yves Surrel's "Design of algorithms for phase measurements by the use of phase stepping" in Applied Optics. Between Hariharan's, Creath's, and de Groot's famous algorithms, what are the commonalities and differences? Surely averaging over phase steps in the argument of the traditional N-buckets technique cannot be the only way to design algorithms, or to differentiate between them for phase-misstep and noise immunity. Actually, the differences have roots in characteristic polynomials.
Surrel begins by describing the recorded intensity as a Fourier series.
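Paraphrasing his framework (the exact symbols here are my reconstruction, so treat them as indicative rather than verbatim), the intensity sampled at the $k$-th step is

$$I_k = \sum_{m=-\infty}^{\infty} \alpha_m\, e^{\,jm(\phi + k\delta)}, \qquad \delta = \frac{2\pi}{N},$$

where the coefficients $\alpha_m$ capture the harmonic content of the fringe (a perfect sinusoid has only $m = 0, \pm 1$). An algorithm built from coefficients $c_k = a_k + j b_k$ through $\tan\phi = \sum_k b_k I_k \big/ \sum_k a_k I_k$ is then summarized by its characteristic polynomial

$$P(x) = \sum_{k=0}^{N-1} c_k\, x^k,$$

and the key result is that rejecting the $m$-th harmonic amounts to $P(x)$ having a root at $x = e^{\,jm\delta}$, with a double root additionally conferring immunity to phase-shift miscalibration.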