I. INTRODUCTION

is based on the Shack-Hartmann WFS [15] that utilizes a

with standardized communication interfaces (see Fig. 2). The proposed sensor utilizes the specialized features of the two processors, such as high-speed parallel processing power, easy programmability, and runtime reconfiguration. The use of an industrial smart camera takes into account key issues of industrial equipment, such as ingress protection ratings, electromagnetic compatibility, and high robustness for in-process measurement applications.

In Section II, the theoretical background for the relation between the displacements of the centroids and the wavefront reconstruction method is given. The FPGA algorithm presented in Section III is divided into two parts. The first part explains the computation of the centroid displacements of the spots. The second part discusses an efficient wavefront reconstruction method based on a vector–matrix multiply (VMM) algorithm. Section IV summarizes the synthesized FPGA algorithm and describes the measurement results of a laboratory prototype.

Fig. 2. Basic block diagram of a smart camera. The components of the camera are listed below the corresponding blocks.

II. THEORETICAL BACKGROUND

A wavefront of a given monochromatic wave phenomenon of wavelength λ is defined as a surface composed of points of equal phase of the describing field strength, originating from the same source. Following the ideas of scalar optical theory, the wavefront of an incident coherent optical wave can be analytically described by the phase distribution φ(x, y) of the complex amplitude at a given plane that is perpendicular to the direction of wave propagation

U(x, y) = A(x, y) · exp(jφ(x, y))    (1)

with A(x, y) being a potential scalar amplitude distribution. This notation is based on a simplified description of optical phenomena commonly used in Fourier optics [23].

A Shack-Hartmann WFS analyzes the phase distribution φ(x, y) of the incident optical wave at the plane of a microlens array in terms of the spatial distribution of the local slopes of the incident wave. The microlens array spatially divides the incident wave into multiple small segments (SEGs), defined by its multiple microlens elements on a periodic grid, typically rectangular or hexagonal. When this spatial sampling process fulfills the Shannon sampling theorem and the curvature of the incident wavefront is small compared to the spatial sampling width, an approximately plane wave SEG is obtained for each wavefront subset. By placing the image sensor in the focal plane of the microlens array at its focal distance, each wavefront subset is analyzed by the corresponding lens element in terms

A. Slope Calculation

The image of a plane wave at normal incidence on an optically ideal thin lens (along its optical axis) leads to a single intensity point in the geometrical optics interpretation. When diffraction at existing apertures is taken into account, a diffraction pattern is observed. A distortion of the incoming wavefront leads to spatially distributed tilts, given by the slope sets (sx,ij, sy,ij) of the wave subsets. These tilts with respect to a plane normal to the optical axis lead to a displacement (Δxij, Δyij) of the intensity pattern away from the optical-axis position on the image plane. The pixel array spatially samples and digitizes the projected intensity distribution at the sensor plane. A common technique for estimating wavefront slopes from displaced intensity patterns is the calculation of the position of the intensity center of gravity (centroid) for each subelement

sx,ij = Δxij / f = (1/f) · [ Σu Σv u · Iij(u, v) / Σu Σv Iij(u, v) − xoff,ij ]

sy,ij = Δyij / f = (1/f) · [ Σu Σv v · Iij(u, v) / Σu Σv Iij(u, v) − yoff,ij ].    (3)

Here, Iij(u, v) denotes the intensity value at the pixel position (u, v) within the corresponding digitized subelement. The displacement of the centroid is decomposed into its Cartesian components Δxij and Δyij and set into relation to the reference position by means of the individual offset reference values xoff,ij and yoff,ij. Dividing by the focal length f of the lens array leads to the slope of the corresponding wavefront SEG. These obtained slopes are related to the spatial frequencies (fx and fy) of the incident plane wave at position (x, y) according to Fourier optics theory [23].

B. Wavefront Reconstruction

The slope values sx,ij and sy,ij as given by (3) are used to obtain the phase distribution φ(x, y) [see (1)]. Three major wavefront reconstruction methods are reported in the literature, namely, the zonal [25] and modal [26] approaches and an approach based on a Fourier transformation (FT) [27].

In the zonal approach, not the whole wavefront is derived, but only discrete values are calculated, which represent OPDs at discrete sampling points that are given by a grid that reflects the geometry of the microlens array. Commonly used grids for Shack-Hartmann sensors are the Fried [28] and the Southwell [29] grids. The first approximates the phase distribution by means of a bilinear fit, and the latter approximates the phase distribution by a biquadratic spline fit.
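As an illustration of the centroid-based slope estimate in (3), the following pure-Python sketch computes (sx,ij, sy,ij) for one digitized subelement; the function name and array layout are illustrative, not part of the FPGA implementation:

```python
def seg_slopes(subimage, x_off, y_off, f):
    """Center-of-gravity slope estimate for one subelement, following (3).

    subimage : 2-D list of intensities I_ij(u, v), indexed as subimage[v][u]
    x_off, y_off : reference centroid position (pixels) of the undistorted wave
    f : focal length of the lens array, in the same unit as the pixel pitch
    """
    total = sum(I for row in subimage for I in row)
    if total == 0:
        return None  # unilluminated SEG: no valid slope
    # First moments sum(u * I) and sum(v * I) over the subelement.
    mom_x = sum(u * I for row in subimage for u, I in enumerate(row))
    mom_y = sum(v * I for v, row in enumerate(subimage) for I in row)
    # Centroid minus reference offset, divided by the focal length.
    s_x = (mom_x / total - x_off) / f
    s_y = (mom_y / total - y_off) / f
    return s_x, s_y
```

For a single bright spot displaced by one pixel in u from the reference position, the function returns a pure x-slope of 1/f and a vanishing y-slope.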
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
THIER et al.: LOW-LATENCY SHACK–HARTMANN WAVEFRONT SENSOR BASED ON AN INDUSTRIAL SMART CAMERA 3
Fig. 5. Structure of the SCU module, calculating the slopes of the wavefront according to (2). Flip-flops (FFs) refer to a single data register, while the FIFO blocks contain an array of registers.

moment stage, which corresponds to (3) for a single SEG in a pipelined architecture. First-in/first-out (FIFO) buffers are used as intermediate storage elements for the partial sums of SEGs within an image row. After the final division, the displacements of the spots are calculated by subtracting the corresponding reference values xoff,ij and yoff,ij, which are stored in an internal memory of the FPGA. The algorithm assumes only one spot in the corresponding SEG, which limits the maximum acceptable spot displacement and determines the dynamic range of the sensor. In order to detect crosstalk between SEGs caused by too large wavefront tilts, a parameterizable counter can be set to invalidate the result if a SEG shows too many illuminated pixels.

Fig. 6. (a) The reconstruction kernel A† is divided into subblocks Bk. All vertically arranged subblocks are transferred sequentially into the internal memory of the FPGA, multiplied by the corresponding slope subvector si, and processed in parallel. (b) Structure of the VMM module, consisting of several subblocks, processing the result vector x in parallel. RAM represents a memory block on the FPGA.
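The blockwise vector–matrix multiply of Fig. 6 can be mimicked in plain Python as a sketch (the subblock/subvector names follow the caption; the real design evaluates the subblocks in parallel FPGA subcircuits rather than in a loop):

```python
def vmm_reconstruct(A_dagger, s, block=4):
    """Blockwise vector-matrix multiply x = A_dagger * s.

    A_dagger : reconstruction kernel, list of rows (one row per result element)
    s        : slope vector; its entries are consumed in subvectors s_i of
               length `block`, mirroring the sequential transfer of the
               vertically arranged subblocks B_k into FPGA-internal memory
    """
    x = [0.0] * len(A_dagger)               # result vector, accumulated subblock by subblock
    for k0 in range(0, len(s), block):      # one subblock B_k / subvector s_i per pass
        k1 = min(k0 + block, len(s))
        for r, row in enumerate(A_dagger):  # these partial products run in parallel on the FPGA
            x[r] += sum(row[c] * s[c] for c in range(k0, k1))
    return x
```

Streaming the kernel in column subblocks keeps only a small slice of A† resident at any time, which is what makes the latency independent of the overall kernel size.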
Fig. 8. Optical setup for testing the Shack–Hartmann WFS prototype, consisting of a laser, a spatial filter, the cylindrical lens mounted on a rotational stage for
controlled aberration, and the smart camera with the microlens array. The distance between the cylindrical lens and the WFS can be varied.
TABLE I. SYSTEM PARAMETERS OF THE IMPLEMENTED SHACK–HARTMANN WFS

TABLE II. FPGA UTILIZATION FOR A SPARTAN-3 XC3S1000 FOR THE PROPOSED SHACK–HARTMANN WFS
Fig. 10. Reconstructed OPDs based on the slope results of the FPGA for a
wavefront due to a cylindrical lens deforming the plane reference wavefront
with rotations of (a) 0° and (b) 90° inside an area of 28 × 28 SEGs.
and the calibration values. This makes the WFS versatile and
easily reconfigurable for different measurement applications.
B. Experimental Results
An optical setup was built to test and verify the Shack-
Hartmann WFS prototype. The optical setup (Fig. 8) consists
of a 0.5-mW HeNe laser (λ = 632.8 nm) pointed at the WFS,
a polarizer for intensity regulation, and a spatial filter to flatten
the wavefront. A cylindrical lens with a focal length of 30 mm
is used to expand the beam in one dimension and introduce a
controlled aberration at an adjustable distance to the sensor.
Rotation of the lens by 90° around the optical axis of the system
enables a separate observation of the x and y displacements of
the spots, which are generated by the controlled wavefront aber-
ration caused by the cylindrical lens. Fig. 9 shows the subimage
of the sensor for 16 × 16 SEGs. The image without the inserted
lens [Fig. 9(a)] is used as reference for the slope calculation and
shows a regular spot grid with all focal points at the center of
the corresponding SEG. When placing a cylindrical lens with its
tangent plane aligned to the x-axis of the image sensor into the
Fig. 9. Recorded images of 16 × 16 subapertures on the CMOS sensor
for three differently aberrated wavefronts of the laser beam. (a) Plane, non- optical path, only spots along the x coordinate are affected by
aberrated wavefront. (b) Plane wavefront that is aberrated by a cylindrical lens the astigmatic aberration [Fig. 9(b)]. Fig. 9(c) shows the sensor
aligned to the x-axis of the sensor. (c) Plane wavefront that is aberrated by image after rotation of the cylindrical lens by 90◦ , resulting in
a cylindrical lens aligned to the y-axis of the sensor. The highlighted SEGs
clearly show the spreading of the spots along the alignment axis toward the rim a spreading of the spots along the y-axis, while the x-axis is
of the image. unaffected [compare to Fig. 9(a)].
Fig. 11. Reconstructed coefficients of Zernike modes j = (n(n + 2) + m)/2, from 1 to 14. The cylindrical lens clearly affects the x-astigmatism and the defocus, where curvature x depicts an alignment of 0° and curvature y depicts a 90° rotation of the lens.
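The single-index convention quoted in the Fig. 11 caption, j = (n(n + 2) + m)/2, maps radial order n and azimuthal frequency m to one mode number; a minimal sketch (helper name hypothetical):

```python
def zernike_index(n, m):
    """Single Zernike mode number j = (n(n + 2) + m) // 2, as in Fig. 11."""
    if abs(m) > n or (n - abs(m)) % 2 != 0:
        raise ValueError("invalid Zernike (n, m) pair")
    # For valid (n, m) pairs, n(n + 2) + m is always even, so // is exact.
    return (n * (n + 2) + m) // 2
```

Defocus (n = 2, m = 0) maps to j = 4, and the two astigmatism terms (n = 2, m = ±2) map to j = 3 and j = 5, matching the modes highlighted in Fig. 11.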
the astigmatic mode while the defocus stays the same, confirming the unchanged distance between the sensor and the lens. The superposition of all Zernike polynomials is shown in Fig. 12. A comparison between the wavefront based on the Southwell approach (black mesh) and the wavefront based on the Zernike polynomials (colored surface) shows that the number of weighting factors is sufficient for representing the cylindrical wavefront created with this setup. In addition, the modal coefficients are also advantageous when combining individual wavefront correction elements, with each correcting a specific aberration mode. Examples are correction collars to compensate for spherical aberrations used in high-numerical-aperture microscopy [31], [32] or a steering mirror to compensate for tip and tilt used in AO systems to reduce the stroke of DM actuators [25].

The results of our prototype Shack–Hartmann WFS show the versatility of industrial smart cameras and their potential as robust and easily (re-)configurable WFS, which can be used for a wide field of measurement applications. The proposed algorithm processes 21 elements of a result vector, such as weighting factors for Zernike polynomials, and adds a latency of only 740 ns to the image readout, which is independent of the size of the region of interest, making it suitable for real-time AO systems.

V. CONCLUSION

In this paper, we have presented a low-latency wavefront reconstruction algorithm targeting a low-cost FPGA and the experimental results of a fast Shack-Hartmann WFS based on an industrial smart camera. The FPGA implementation calculates the slopes based on a center-of-gravity spot detection as the basis for a VMM algorithm in order to reconstruct the wavefront. The wavefront reconstruction algorithm based on a VMM adds only 740 ns to the image sensor readout while calculating a total of 21 Zernike coefficients, or 21 elements of a result vector in general, at a field of view of up to 30 × 30 SEGs, independent of the adjustable sensing area. The algorithm features runtime parameterization of the region of interest, lens-geometry factors, calibration data, and individual thresholds. The prototype WFS has been tested successfully in a laboratory setup, using a defined wavefront as input for testing the zonal and modal reconstruction approaches. Further work will focus on increasing the number of computable coefficients by systematic investigation of the necessary bit width of the fixed-point numbers while maintaining a precise wavefront result.

Future work will be directed toward the implementation of a full AO system on the smart camera, including the WFS as well as the control system to control a DM. An approach based on the vector–matrix multiplication of an influence matrix could directly process signals for the actuator space of a DM. It is expected that this will considerably reduce the delay between wavefront sensing and aberration correction, which will improve the bandwidth of AO systems based on a smart camera.

ACKNOWLEDGMENT

The authors would like to thank H. W. Yoo from Delft University of Technology, P. Chang from the Vienna University of Technology, and T. Berndorfer, W. van Dyck, and R. Smodic from Festo AG & Company for the support and fruitful discussions.

REFERENCES

[1] P. Wizinowich, "Adaptive optics and Keck Observatory," IEEE Instrum. Meas. Mag., vol. 8, no. 2, pp. 12–19, Jun. 2005.
[2] D. Neal, D. Pierson, T. O'Hern, J. Torczynski, M. E. Warren, R. Shul, and T. S. McKechnie, "Wavefront sensors for optical diagnostics in fluid mechanics: Application to heated flow, turbulence and droplet evaporation," Proc. SPIE, vol. 2005, no. 1, pp. 194–203, Dec. 1993.
[3] P. Soliz, S. Nemeth, G. Erry, L. Otten, and S. Yang, "Perceived image quality improvements from the application of image deconvolution to retinal images from an adaptive optics fundus imager," in Adaptive Optics for Industry and Medicine, vol. 102, U. Wittrock, Ed. Berlin Heidelberg, Germany: Springer-Verlag, 2005, ser. Springer Proceedings in Physics, pp. 343–352.
[4] R. K. Tyson, Principles of Adaptive Optics, 3rd ed. Boca Raton, FL: CRC Press, 2011.
[5] Z. Chen and S. Fu, "Optical wavefront distortion due to supersonic flow fields," Chin. Sci. Bull., vol. 54, no. 4, pp. 623–627, Feb. 2009.
[6] M. J. Cyca, S. A. Spiewak, and R. J. Hugo, "Non-invasive mapping of fluid temperature and flow in microsystems," in Proc. ICMENS, 2005, pp. 21–26.
[7] C. Li, G. Hall, B. Aldalali, D. Zhu, K. Eliceiri, and H. Jiang, "Surface profiling and characterization of microlenses utilizing a Shack–Hartmann wavefront sensor," in Proc. Int. Conf. OMN, 2011, pp. 185–186.
[8] S. Campbell, S. M. F. Triphan, R. El-Agmy, A. H. Greenaway, and D. T. Reid, "Direct optimization of femtosecond laser ablation using adaptive wavefront shaping," J. Opt. A, Pure Appl. Opt., vol. 9, no. 11, pp. 1100–1104, Nov. 2007.
[9] F. Bortoletto, "The TNG commissioning [Telescopio Nazionale Galileo]," in Proc. IEEE 16th IMTC, 1999, vol. 2, pp. 627–632.
[10] R. Ragazzoni, "Adaptive optics projects," in Proc. IEEE 16th IMTC, 1999, vol. 2, pp. 1112–1116.
[11] J. Porter, H. M. Queener, J. E. Lin, K. Thorn, and A. Abdul, Adaptive Optics for Vision Science. Hoboken, NJ: Wiley, 2006.
[12] B. Mikulec, J. Vallerga, J. McPhate, A. Tremsin, O. Siegmund, and A. Clark, "A high resolution, high frame rate detector based on a microchannel plate readout with the Medipix2 counting CMOS pixel chip," IEEE Trans. Nucl. Sci., vol. 52, no. 4, pp. 1021–1026, Aug. 2005.
[13] F. Zappa, S. Tisa, S. Cova, P. Maccagnani, D. Calia, R. Saletti, R. Roncella, G. Bonanno, and M. Belluso, "Single-photon avalanche diode arrays for fast transients and adaptive optics," IEEE Trans. Instrum. Meas., vol. 55, no. 1, pp. 365–374, Feb. 2006.
[14] N. Hubin, B. L. Ellerbroek, R. Arsenault, R. M. Clare, R. Dekany, L. Gilles, M. Kasper, G. Herriot, M. Le Louarn, E. Marchetti, S. Oberti, J. Stoesz, J. P. Veran, and C. Vérinaud, "Adaptive optics for extremely large telescopes," in Proc. Int. Astronom. Union, 2006, vol. 232, pp. 60–85.
[15] B. C. Platt and R. Shack, "History and principles of Shack–Hartmann wavefront sensing," J. Refract. Surg., vol. 17, no. 5, pp. S573–S577, Sep./Oct. 2001.
[16] G. J. Hovey, R. Conan, F. Gamache, G. Herriot, Z. Ljusic, D. Quinn, M. Smith, J. P. Veran, and H. Zhang, "An FPGA based computing platform for adaptive optics control," in Proc. 1st Conf. AO4ELT, Y. Clenet, J. Conan, T. Fusco, and G. Rousset, Eds., 2009, pp. 1–6.
[17] K. Kepa, D. Coburn, J. C. Dainty, and F. Morgan, "High speed optical wavefront sensing with low cost FPGAs," Meas. Sci. Rev., vol. 8, no. 4, pp. 87–93, Jan. 2008.
[18] A. Basden, D. Geng, R. Myers, and E. Younger, "Durham adaptive optics real-time controller," Appl. Opt., vol. 49, no. 32, pp. 6354–6363, Nov. 2010.
[19] C. D. Saunter, G. D. Love, M. Johns, and J. Holmes, "FPGA technology for high-speed low-cost adaptive optics," in Proc. SPIE, 2005, vol. 6018, pp. 429–435.
[20] S. Lynch, D. Coburn, F. Morgan, and C. Dainty, "FPGA based adaptive optics control system," in Proc. IET ISSC, 2008, pp. 192–197.
[21] L. Rodriguez-Ramos, A. Alonso, F. Gago, J. Gigante, G. Herrera, and T. Viera, "Adaptive optics real-time control using FPGA," in Proc. Int. Conf. FPL Appl., 2006, pp. 1–6.
[22] R. Paris, M. Thier, T. Thurner, and G. Schitter, "Shack Hartmann wavefront sensor based on an industrial smart camera," in Proc. IEEE I2MTC, 2012, pp. 1127–1132.
[23] J. W. Goodman, Introduction to Fourier Optics, 3rd ed. Greenwood Village, CO: Roberts & Company Publishers, 2005.
[24] Y. Dai, F. Li, X. Cheng, Z. Jiang, and S. Gong, "Analysis on Shack–Hartmann wave-front sensor with Fourier optics," Opt. Laser Technol., vol. 39, no. 7, pp. 1374–1379, Oct. 2007.
[25] P.-Y. Madec, "Control techniques," in Adaptive Optics in Astronomy. Cambridge, U.K.: Cambridge Univ. Press, 1999, pp. 131–154.
[26] G. M. Dai, Wavefront Optics for Vision Correction. Bellingham, WA: SPIE, 2008.
[27] L. A. Poyneer, D. T. Gavel, and J. M. Brase, "Fast wave-front reconstruction in large adaptive optics systems with use of the Fourier transform," J. Opt. Soc. Amer. A, Opt., Image Sci., Vis., vol. 19, no. 10, pp. 2100–2111, Oct. 2002.
[28] D. L. Fried, "Least-square fitting a wave-front distortion estimate to an array of phase-difference measurements," J. Opt. Soc. Amer., vol. 67, no. 3, pp. 370–375, Mar. 1977.
[29] W. H. Southwell, "Wave-front estimation from wave-front slope measurements," J. Opt. Soc. Amer., vol. 70, no. 8, pp. 998–1006, Aug. 1980.
[30] W. van Dyck, R. Smodic, H. Hufnagl, and T. Berndorfer, "High-speed JPEG coder implementation for a smart camera," J. Real-Time Image Process., vol. 1, no. 1, pp. 63–68, Oct. 2006.
[31] M. Schwertner, M. J. Booth, and T. Wilson, "Simple optimization procedure for objective lens correction collar setting," J. Microsc., vol. 217, no. 3, pp. 184–187, Mar. 2005.
[32] H. W. Yoo, M. Verhaegen, M. E. van Royen, and G. Schitter, "Automated adjustment of aberration correction in scanning confocal microscopy," in Proc. IEEE I2MTC, 2012, pp. 1083–1088.

Markus Thier received his M.S. degree in automation from the Vienna University of Technology in 2012. Since June 2012, he has been a research assistant working towards his Ph.D. at the Automation and Control Institute (ACIN), Faculty of Electrical Engineering and Information Technology, Vienna University of Technology. His research fields are field-programmable gate array based real-time measurement systems and adaptive optics.

Rene Paris (S'12) received his M.S. degree in automation from the Vienna University of Technology in 2010. In addition to his studies, he worked from 2008 to 2010 within the field of image processing and automation. Since October 2010, he has been a research assistant and performs research towards his Ph.D. at the Automation and Control Institute (ACIN), Faculty of Electrical Engineering and Information Technology, Vienna University of Technology, where his research interests are focused on optical measurement and smart camera based measurement systems.

Thomas Thurner (M'02) completed his studies in electrical engineering at the Graz University of Technology, with a focus on measurement and control engineering, in 1999, and his doctorate in technical sciences at the Graz University of Technology in 2004. From 2000 to 2008, he worked as an assistant professor at the Institute of Electrical Measurement and Measurement Signal Processing at the Graz University of Technology. Since June 2008, he has been heading the fatigue testing facility at the Graz University of Technology.

Georg Schitter (SM'11) is Professor for Industrial Automation at the Automation and Control Institute (ACIN) in the Faculty of Electrical Engineering and Information Technology of the Vienna University of Technology. His primary research interests are on high-performance mechatronic systems and multidisciplinary systems integration, particularly for precision engineering applications in the high-tech industry, scientific instrumentation, and mechatronic imaging systems that require precise positioning combined with high bandwidths, such as scanning probe microscopy, adaptive optics, and lithography systems for the semiconductor industry.