
Low-Latency Shack-Hartmann Wavefront Sensor Based on an Industrial Smart Camera

Article in IEEE Transactions on Instrumentation and Measurement · May 2012
DOI: 10.1109/I2MTC.2012.6229194
Authors: René Paris (In-Tech Engineering), Markus Thier (TU Wien), Thomas Thurner (Montanuniversität Leoben), Georg Schitter (TU Wien)


This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT 1

Low-Latency Shack–Hartmann Wavefront Sensor Based on an Industrial Smart Camera

Markus Thier, Rene Paris, Student Member, IEEE, Thomas Thurner, Member, IEEE, and Georg Schitter, Senior Member, IEEE

Abstract—Wavefront sensing is important in various optical measurement systems, particularly in the field of adaptive optics (AO). For AO systems, the sampling rate, as well as the latency time, of the wavefront sensors (WFSs) imposes a restriction on the overall achievable temporal resolution. In this paper, we propose a versatile Shack–Hartmann WFS based on an industrial smart camera for high-performance measurements of wavefront deformations, using a low-cost field-programmable gate array as the parallel processing platform. The proposed wavefront reconstruction adds a processing latency of only 740 ns for calculating wavefront characteristics from the pixel stream of the image sensor, providing great potential for demanding AO system designs.

Index Terms—Algorithms, cameras, field-programmable gate arrays (FPGAs), image sensors, optical distortion.

Fig. 1. (a) The wavefront is spatially sampled by the microlens array, forming spots on an image sensor. (b) The (red) spots of an inclined wavefront are displaced from the (gray) spots of an ideally plane wavefront.

I. INTRODUCTION

IN MANY OPTICAL imaging systems, the image quality is diminished by undesired aberrations. Wavefront aberrations occur when light passes through optically heterogeneous paths, such as a turbulent atmosphere [1] or heating haze [2]. A measured wavefront contains information on the optical path differences (OPDs) with respect to, e.g., an ideally plane wavefront, spatially distributed over the sensing area. Such wavefront deviations are classified as wavefront aberrations throughout the following discussion. The measurement of the aberration can be used to characterize the translucent media or to increase the imaging resolution. To improve the image quality, one can either use subsequent image processing [3] or actively manipulate the optical path in real time with, e.g., a deformable mirror (DM) [4], as widely applied in adaptive optics (AO) systems. Wavefront sensing is used in various application domains such as aero-optics measurements [5], fluid measurements [6], characterization of optical components [7], laser optics [8], astronomy [1], [9], [10], and medical imaging [11]. Recent works deal with the development of high-speed wavefront sensors (WFSs) [12], [13], such as for extremely large telescopes [14].

Wavefront aberrations can be measured in several ways, leading to various designs of WFSs. A common technique is based on the Shack-Hartmann WFS [15], which utilizes a microlens array to segment the incoming wavefront onto an active sensing area, as shown in Fig. 1. Due to aberrations, the focused spots are displaced within the focal plane of the corresponding microlens from the ideal focal point, defined by a reference wavefront, such as an ideally plane wavefront. The displacement of these easily detectable and spatially locatable spots is proportional to the mean inclination of the wavefront fragment, serving as input for a mathematical estimation of the total wavefront shape. The time-consuming wavefront reconstruction can be parallelized for high performance, e.g., by the use of field-programmable gate arrays (FPGAs).

With their highly parallelized and programmable architecture, FPGAs offer several advantages over standard digital signal processors and CPUs, such as real-time capability through a parallel working scheme, flexibility and efficiency in data word length, highly efficient I/O interfaces, and lower power consumption at comparable performance [16]. Recent contributions report the usage of FPGAs in Shack-Hartmann WFSs for displacement calculation [17], [18], wavefront reconstruction, or least squares reconstruction for DM control [19], [20], and as part of a closed-loop AO system [21].

In a previous work, we showed the usage of a versatile industrial camera as a Shack-Hartmann WFS and its internal FPGA as a preprocessing stage for wavefront reconstruction [22]. In the following, we describe a further development of our WFS to achieve a higher processing performance as needed for real-time control AO applications. In particular, we focus on a low overall latency, an important factor for closed-loop systems [18], and on versatility through reconfiguration to allow a broad field of applications. In this context, the term "smart camera" is used for an encapsulated and versatile camera design with an FPGA, close to the image sensor, and an embedded processor with standardized communication interfaces (see Fig. 2).

Manuscript received June 29, 2012; revised August 29, 2012; accepted September 6, 2012. The Associate Editor coordinating the review process for this paper was Dr. George Xiao.
M. Thier, R. Paris, and G. Schitter are with the Automation and Control Institute, Faculty of Electrical Engineering and Information Technology, Vienna University of Technology, 1040 Vienna, Austria (e-mail: [email protected]; [email protected]; [email protected]).
T. Thurner is with the Institute of Lightweight Design, Graz University of Technology, 8010 Graz, Austria (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIM.2012.2223333
0018-9456/$31.00 © 2012 IEEE

The proposed sensor utilizes the specialized features of the two processors, such as high-speed parallel processing power, easy programmability, and runtime reconfiguration. The use of an industrial smart camera takes into account key issues of industrial equipment, such as ingress protection ratings, electromagnetic compatibility, and high robustness for in-process measurement applications.

Fig. 2. Basic block diagram of a smart camera. The components of the camera are listed below the corresponding blocks.

In Section II, the theoretical background for the relation between the displacements of the centroids and the wavefront reconstruction method is given. The FPGA algorithm presented in Section III is divided into two parts. The first part explains the computation of the centroid displacements of the spots. The second part discusses an efficient wavefront reconstruction method based on a vector–matrix multiply (VMM) algorithm. Section IV summarizes the synthesized FPGA algorithm and describes the measurement results of a laboratory prototype.

II. THEORETICAL BACKGROUND

A wavefront of a given monochromatic wave phenomenon of wavelength λ is defined as a surface composed of points of equal phase of the describing field strength, originating from the same source. Following the ideas of scalar optical theory, the wavefront of an incident coherent optical wave can be analytically described by the phase distribution φ(x, y) of the complex amplitude at a given plane that is perpendicular to the direction of wave propagation

\[ U(x, y) = A(x, y) \cdot \exp\big(j\phi(x, y)\big) \tag{1} \]

with A(x, y) being a potential scalar amplitude distribution. This notation is based on a simplified description of optical phenomena commonly used in Fourier optics [23].

A Shack-Hartmann WFS analyzes the phase distribution φ(x, y) of the incident optical wave at the plane of a microlens array in terms of the spatial distribution of the local slopes of the incident wave. The microlens array spatially divides the incident wave into multiple small segments (SEGs), defined by its multiple microlens elements on a periodic grid, typically rectangular or hexagonal. When this spatial sampling process fulfills the Shannon sampling theorem and the curvature of the incident wavefront is small compared to the spatial sampling width, an approximately plane wave SEG is obtained for each wavefront subset. By placing the image sensor in the focal plane of the microlens array at its focal distance, each wavefront subset is analyzed by the corresponding lens element in terms of local spatial frequency, corresponding to the slopes of the wave subsets with respect to normal incidence. Following [24], the incident slope sets (s_{x,ij}, s_{y,ij}) for each grid element (i, j) are directly related to the partial spatial derivatives of the phase distribution of the optical field

\[ s_{x,ij} = \frac{\lambda}{2\pi} \frac{\partial}{\partial x} \phi(x_{ij}, y_{ij}) \qquad s_{y,ij} = \frac{\lambda}{2\pi} \frac{\partial}{\partial y} \phi(x_{ij}, y_{ij}). \tag{2} \]

These obtained slopes are related to the spatial frequencies (f_x and f_y) of the incident plane wave at position (x, y) according to Fourier optics theory [23].

A. Slope Calculation

The image of a plane wave at normal incidence on an optically ideal thin lens (along its optical axis) leads to a single intensity point in the geometrical optics interpretation. When diffraction at existing apertures is taken into account, a diffraction pattern is observed. A distortion of the incoming wavefront leads to spatially distributed tilts, given by the slope sets (s_{x,ij}, s_{y,ij}) of the wave subsets. These tilts with respect to a plane normal to the optical axis lead to a displacement (Δx_{ij}, Δy_{ij}) of the intensity pattern away from the optical axis position on the image plane. The pixel array spatially samples and digitizes the projected intensity distribution at the sensor plane. A common technique for estimating wavefront slopes from displaced intensity patterns is the calculation of the position of the intensity center of gravity (centroid) for each subelement

\[ s_{x,ij} = \frac{\Delta x_{ij}}{f} = \frac{1}{f} \left( \frac{\sum_u \sum_v u \, I_{ij}(u, v)}{\sum_u \sum_v I_{ij}(u, v)} - x_{\mathrm{off},ij} \right) \qquad s_{y,ij} = \frac{\Delta y_{ij}}{f} = \frac{1}{f} \left( \frac{\sum_u \sum_v v \, I_{ij}(u, v)}{\sum_u \sum_v I_{ij}(u, v)} - y_{\mathrm{off},ij} \right). \tag{3} \]

Here, I_{ij}(u, v) denotes the intensity value at the pixel position (u, v) within the corresponding digitized subelement. The displacement of the centroid is decomposed into its Cartesian components Δx_{ij} and Δy_{ij} and set into relation to the reference position by means of the individual offset reference values x_{off,ij} and y_{off,ij}. Dividing by the focal length f of the lens array leads to the slope of the corresponding wavefront SEG.

B. Wavefront Reconstruction

The slope values s_{x,ij} and s_{y,ij} as given by (3) are used to obtain the phase distribution φ(x, y) [see (1)]. Three major wavefront reconstruction methods are reported in the literature, namely, the zonal [25] and modal [26] approaches and an approach based on a Fourier transformation (FT) [27].

In the zonal approach, not the whole wavefront is derived, but only discrete values are calculated, which represent OPDs at discrete sampling points that are given by a grid that reflects the geometry of the microlens array. Commonly used grids for Shack-Hartmann sensors are the Fried [28] and the Southwell [29] grids. The first approximates the phase distribution by means of a bilinear fit, and the latter approximates the phase distribution by a biquadratic spline fit.
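To make the centroid computation concrete, (3) can be sketched in a few lines of NumPy. This is an illustrative reference model only, not the paper's FPGA implementation; the function name and the choice of expressing the focal length in pixel-pitch units are assumptions:

```python
import numpy as np

def seg_slopes(seg, x_off, y_off, f_px):
    """Centroid-based slope estimate of (3) for one subaperture image.

    seg   : 2-D array of pixel intensities I_ij(u, v)
    x_off : reference centroid position along u (pixels)
    y_off : reference centroid position along v (pixels)
    f_px  : microlens focal length expressed in pixel-pitch units
    """
    total = seg.sum()
    if total == 0:                    # unilluminated SEG: no spot to locate
        return 0.0, 0.0
    v, u = np.indices(seg.shape)      # v: row index, u: column index
    cx = (u * seg).sum() / total      # first intensity moment along u (centroid)
    cy = (v * seg).sum() / total      # first intensity moment along v
    return (cx - x_off) / f_px, (cy - y_off) / f_px
```

For a single bright pixel displaced from the reference position, the returned slopes equal the displacement divided by the focal length, as in (3).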

THIER et al.: LOW-LATENCY SHACK–HARTMANN WAVEFRONT SENSOR BASED ON AN INDUSTRIAL SMART CAMERA 3

Fig. 3. The proposed wavefront reconstruction algorithm is divided into a threshold (Tr) module, a slope calculation unit (SCU) module, and a module performing a vector–matrix multiplication (VMM). An external synchronous dynamic random access memory (SDRAM) stores the matrix elements. The application can be parameterized through the control interface with the configuration module.

The modal approach uses superimposed orthogonal modes weighted by independent expansion coefficients for the wavefront approximation, such as Zernike polynomials [26]. If only a few specific modes are of interest, such as astigmatism, coma, defocus, and spherical aberration, the computational effort can be reduced by limiting the reconstruction to these specific modes. The FT-based approach comprises a forward FT of the slope calculations, an inverse filter applied in the spatial domain, and an inverse FT.

Zonal and modal methods are based on a similar mathematical structure and, therefore, are chosen for a first implementation on the industrial smart camera. Both approaches result in a set of linear equations, also referred to as a VMM, which can be represented by

\[ \mathbf{s} = \mathbf{A}\mathbf{x} \tag{4} \]

where s is the slope measurement vector of dimension m and A is an m × n coefficient matrix, where m and n depend on the selected sampling grid configuration or on the number of used modal basis modes. The vector x is of size n and represents the OPDs at given sampling points or the gain factors of the orthogonal modal functions, respectively. In general, m > n, which leads to an overdetermined system. If rank(A) < n, the inverse can be calculated using singular-value decomposition, leading to the Moore–Penrose pseudoinverse A† = VΣ†Uᵀ, where U and V are orthogonal matrices of dimensions m × m and n × n, respectively, and Σ† contains the reciprocal singular values of A on its main diagonal. The inverse problem eventually is solved by

\[ \mathbf{x} = \mathbf{V}\boldsymbol{\Sigma}^{\dagger}\mathbf{U}^{T}\mathbf{s} = \mathbf{A}^{\dagger}\mathbf{s}. \tag{5} \]

III. IMPLEMENTATION

In the following, a parallelized centroid detection and wavefront reconstruction algorithm is presented. Fig. 3 shows an overview of the complete algorithm, comprising a threshold filter Tr to reduce readout noise of the image sensor, a slope calculation unit (SCU) module for slope calculation, a VMM module for wavefront reconstruction, and a configuration module for parameterizing the application. To cope with irregular illumination, an individual static threshold value is set for every SEG that corresponds to a microlens. In the following sections, the SCU and VMM algorithms are described in detail. The analyzed measurement data are relayed via direct memory access (DMA) to the embedded processor, where a C++ program sends the data via Ethernet to an external PC for further processing.

Fig. 4. (a) SEG definition and (b) timing diagram for the wavefront calculation. The actual delay between the end of frame (EoF) and the end of calculation (EoC) is the latency time t_L, comprising the time t_c for the slope calculation, the time t_v for performing the VMM operation, and the time t_t for transferring the result to the output.

A. Slope Calculation Unit (SCU)

The 2-D pixel array of the image sensor is divided into a fixed grid of squared SEGs, matching the size of a microlens. The individual slope components s_{x,ij} and s_{y,ij}, as given by (3), are calculated in parallel to the image transfer from the image sensor to the FPGA. In Fig. 4(a), the relation between the spatial pixel position and the individual SEGs can be seen. Fig. 4(b) shows the corresponding timing diagram for a single image frame with exposure time t_exp and transfer time t_transfer (upper panel) and the time for the proposed wavefront calculation (middle panel). The memory transfer timing (lower panel) will be described in Section III-B. The image acquisition starts with the initial trigger T_0, followed by the exposure time of the sensor. The pixels of an image frame are transferred row by row from the image sensor to the FPGA as a serial pixel stream. When the last pixel of a SEG is reached [the gray-highlighted pixels in Fig. 4(a)], a corresponding trigger (T_11 to T_ij) is set in order to start the division of the sums in a parallel operating stage (middle panel). The processing time t_c for a single slope consists of the processing time for the last partial sums and the division of their final results. This time mainly depends on the chosen fixed-point precision determining the complexity of the division. Fig. 5 shows the inner structure of the row, column, and overall intensity summing unit and the first moment stage, which corresponds to (3) for a single SEG in a pipelined architecture.
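The offline computation of A† in (5) and the subsequent least-squares solve performed by the VMM step can be sketched numerically as follows. This is a plain floating-point NumPy sketch of the underlying mathematics, not the fixed-point FPGA realization; the function name and the toy geometry matrix are illustrative assumptions:

```python
import numpy as np

def pseudoinverse(A, tol=1e-12):
    """Moore-Penrose pseudoinverse A† = V Σ† Uᵀ via SVD, as in (5)."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    sigma_inv = np.where(sigma > tol, 1.0 / sigma, 0.0)  # reciprocal singular values
    return (Vt.T * sigma_inv) @ U.T                      # V Σ† Uᵀ

# Offline: A (m x n) maps wavefront values x to slopes s; A† is stored once.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])            # toy 3 x 2 geometry matrix (m > n)
A_dag = pseudoinverse(A)

# Online (the VMM step): least-squares wavefront estimate from slopes s.
x = np.array([0.5, -0.25])
s = A @ x                             # simulated noise-free slope measurements
x_hat = A_dag @ s                     # recovers x exactly for full column rank
```

For a full-column-rank A, A†A equals the identity, so the noise-free slope vector reproduces x exactly; with noisy slopes, x_hat is the least-squares estimate.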

Fig. 5. Structure of the SCU module, calculating the slopes of the wavefront according to (2). Flip-flops (FFs) refer to a single data register, while the FIFO blocks contain an array of registers.

First-in/first-out (FIFO) buffers are used as intermediate storage elements for the partial sums of SEGs within an image row. After the final division, the displacements of the spots are calculated by subtracting the corresponding reference values x_{off,ij} and y_{off,ij}, which are stored in an internal memory of the FPGA. The algorithm assumes only one spot in the corresponding SEG, which limits the maximum acceptable spot displacement and determines the dynamic range of the sensor. In order to detect crosstalk between SEGs caused by too large wavefront tilts, a parameterizable counter can be set to invalidate the result if a SEG shows too many illuminated pixels.

B. Vector–Matrix Multiply (VMM)

Fig. 6. (a) The reconstruction kernel A† is divided into subblocks B_k. All vertically arranged subblocks are transferred sequentially into the internal memory of the FPGA, multiplied by the corresponding slope subvector s_i, and processed in parallel. (b) Structure of the VMM module, consisting of several subblocks, processing the result vector x in parallel. RAM represents a memory block on the FPGA.

Due to the fixed sampling geometry, the reconstruction matrix A† [see (5)] can be calculated in advance. In order to reduce the internal memory consumption, A† is stored in an external SDRAM, and only parts of it are transferred into the memory of the FPGA right before calculation. The matrix A† is divided into k × i submatrices B_k of dimension m × 2j, as shown in Fig. 6, where j is the row index of a SEG and m is a factor depending on the internal timing. This factor is m ≤ u/c_p, where u is the size of a SEG in pixels and c_p is the number of pixels computed at each clock cycle. Note that u/c_p is the number of clock cycles between two adjacent slope values. In general, the dimension n of the result vector x is greater than j, yielding k submatrices B_k, which are processed in parallel. The data transfer from the SDRAM into the internal memory of the FPGA is shown in Fig. 4(b) (lower panel) and triggered by L_i. The data transfer must be completed before the slope calculation s_{i1} of the first SEG in each image row ends. Assuming a quadratic size u of each SEG, k can be calculated by

\[ k = \frac{u(u-1)}{2\,c_c\,c_p\,m} + \frac{u+c}{2j\,c_c\,m}. \tag{6} \]

The number of clock cycles required for the transfer of a single matrix element, referred to as b_{xmj} and b_{ymj}, is given by c_c. Fig. 6(b) shows the FPGA logic of a subblock, calculating m elements of the result vector x by means of time multiplexing of two parallel multipliers for the x and y slopes, respectively. A FIFO buffer stores the sum of the elements of the result vector. The complete result vector x can be computed by several subblocks running in parallel. The time t_v (Fig. 4) required to finish the vector–matrix multiplication for each subblock is determined by the number of elements m which are processed sequentially and the time needed for the multiplication itself. Finally, the result vector is transferred sequentially to the output, given by the transfer time t_t.

Fig. 7. The slope calculation is applied on a squared surface (region of interest), while the wavefront reconstruction algorithm only includes slopes within the circular boundary (reconstruction area). The computed aperture is linked to the region of interest and can be smaller than the aperture of the microlens array.

IV. RESULTS

A Festo R1B industrial smart camera (Festo AG & Company, Esslingen am Neckar, Germany) serves as the platform for the proposed WFS algorithm. A detailed description by the manufacturer of this camera can be found in [30].

Fig. 8. Optical setup for testing the Shack–Hartmann WFS prototype, consisting of a laser, a spatial filter, the cylindrical lens mounted on a rotational stage for controlled aberration, and the smart camera with the microlens array. The distance between the cylindrical lens and the WFS can be varied.

TABLE I
SYSTEM PARAMETERS OF THE IMPLEMENTED SHACK–HARTMANN WFS

TABLE II
FPGA UTILIZATION FOR A SPARTAN-3 XC3S1000 FOR THE PROPOSED SHACK–HARTMANN WFS

The smart camera consists of a CMOS image sensor (Micron MT9V403, Micron Technology Inc., Boise, ID), a Spartan-3 XC3S1000 FPGA (Xilinx Inc., San Jose, CA), and an embedded processor (PXA255 XScale, Marvell Corporation, Santa Clara, CA). A stripped-down Linux operating system runs on the smart camera and offers easy access to industrial interfaces such as Ethernet and the controller area network bus. The image sensor with a maximum resolution of 640 × 480 pixels is clocked at 60 MHz, acquiring 186 frames per second at full resolution. Higher frame rates can be achieved by reducing the height of the image, given by the number of image rows. The standard objective lens is replaced by a rectangular lens array with a pitch size of 150 μm, a focal length of 10 mm, and a circular aperture of about 4.5 mm (Flexible Optical B.V., Rijswijk, The Netherlands). The aperture trims the incoming wavefront and confines the lens array to approximately 600 lenses, and thus image SEGs, that are projected onto the CMOS sensor. The system parameters of the sensor prototype are listed in Table I.

A. FPGA Design

Since circular apertures are commonly used, the wavefront must be analyzed within a circle. The sensor restricts the computation to rectangular regions of interest, so a square region has been chosen, which covers the whole circular aperture, as shown in Fig. 7. The SCU module computes the slopes of all SEGs within the region of interest, while the VMM module considers only SEGs within the circle. The size of a SEG directly depends on the selected lens array and the pixel pitch of the image sensor. The codomain of the slope calculation is limited by half the width of a SEG to a maximum of [−8, 8]; the codomain of the reconstruction matrix A† is normalized to [−1, 1]. The point spread function of the subaperture, given by the individual microlens, covers several pixels of the image sensor and allows the calculation of the slopes with subpixel resolution. The minimum resolvable spot displacement is 1/64 of a pixel when six fractional bits are used for the slope calculation. For the matrix elements, 22 bits are used. Following (6), with m = 7 defined by half the width of a SEG, c_p = 2 as the number of pixels concurrently processed, and c_c = 2 due to the 16-bit data interface of the SDRAM, the reconstruction matrix is processed by three parallel working VMM subblocks. The algorithm is designed to handle a matrix with 37 800 elements, which equals 21 coefficients for a maximum number of 30 × 30 SEGs. Table II shows the resource usage of the synthesized FPGA design, divided into three main modules, namely, peripheral, SCU, and VMM. The peripheral module contains logic to interface peripheral components such as the image sensor, the SDRAM, and the embedded processor. The SCU module uses hardware multipliers for fast calculations. The FIFO buffers are based on block RAMs. The memory blocks of the VMM module use two block RAMs. The bit width of the VMM data requires two hardware multipliers for each operation, resulting in four needed multipliers for each VMM subblock. The algorithm is designed to run at a clock speed of 100 MHz, leading to a processing time for the slope calculation of t_c = 400 ns, a processing time of t_v = 130 ns to finish the VMM operation, and a transfer time of t_t = 210 ns for the result vector. The overall latency time is therefore 740 ns (cf. Fig. 4), given by the required 74 clock cycles. The program allows for runtime parameterization of the region of interest, the size and focal length of the lens array, the individual threshold values, and the calibration values. This makes the WFS versatile and easily reconfigurable for different measurement applications.
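The stated timing budget can be cross-checked with a short back-of-the-envelope script: at the 100-MHz clock, one cycle is 10 ns, so t_c, t_v, and t_t correspond to 40, 13, and 21 cycles, and the 74-cycle total yields the 740-ns latency (an illustrative sanity check, not part of the design itself):

```python
clock_hz = 100e6                                   # FPGA clock: 100 MHz -> 10 ns per cycle
stage_cycles = {"t_c": 40, "t_v": 13, "t_t": 21}   # slope calc, VMM, result transfer

stage_ns = {k: 1e9 * c / clock_hz for k, c in stage_cycles.items()}
total_cycles = sum(stage_cycles.values())          # 74 clock cycles in total
latency_ns = 1e9 * total_cycles / clock_hz         # overall processing latency in ns
```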

Fig. 10. Reconstructed OPDs based on the slope results of the FPGA for a wavefront due to a cylindrical lens deforming the plane reference wavefront, with rotations of (a) 0° and (b) 90°, inside an area of 28 × 28 SEGs.

B. Experimental Results

An optical setup was built to test and verify the Shack-Hartmann WFS prototype. The optical setup (Fig. 8) consists of a 0.5-mW HeNe laser (λ = 632.8 nm) pointed at the WFS, a polarizer for intensity regulation, and a spatial filter to flatten the wavefront. A cylindrical lens with a focal length of 30 mm is used to expand the beam in one dimension and to introduce a controlled aberration at an adjustable distance to the sensor. Rotation of the lens by 90° around the optical axis of the system enables a separate observation of the x and y displacements of the spots, which are generated by the controlled wavefront aberration caused by the cylindrical lens. Fig. 9 shows the subimage of the sensor for 16 × 16 SEGs. The image without the inserted lens [Fig. 9(a)] is used as reference for the slope calculation and shows a regular spot grid with all focal points at the center of the corresponding SEG. When placing a cylindrical lens with its tangent plane aligned to the x-axis of the image sensor into the optical path, only spots along the x coordinate are affected by the astigmatic aberration [Fig. 9(b)]. Fig. 9(c) shows the sensor image after rotation of the cylindrical lens by 90°, resulting in a spreading of the spots along the y-axis, while the x-axis is unaffected [compare to Fig. 9(a)].

Fig. 9. Recorded images of 16 × 16 subapertures on the CMOS sensor for three differently aberrated wavefronts of the laser beam. (a) Plane, non-aberrated wavefront. (b) Plane wavefront that is aberrated by a cylindrical lens aligned to the x-axis of the sensor. (c) Plane wavefront that is aberrated by a cylindrical lens aligned to the y-axis of the sensor. The highlighted SEGs clearly show the spreading of the spots along the alignment axis toward the rim of the image.

Fig. 11. Reconstructed coefficients of Zernike modes j = (n(n + 2) + m)/2, from 1 to 14. The cylindrical lens clearly affects the x-astigmatism and the defocus, where curvature x depicts an alignment of 0° and curvature y depicts a 90° rotation of the lens.

For the measurements, the wavefront created by the cylindrical lens is evaluated for a circular boundary, fitted into 28 × 28 SEGs. The slope calculation is executed on the FPGA, and the results are internally transferred to the embedded processor and sent to an external PC via Ethernet. A measurement program written in LabVIEW (National Instruments Corporation, Austin, TX) performs the wavefront reconstruction.

The wavefront reconstruction based on the zonal approach in the Southwell configuration [29] and processed on the host PC is shown in Fig. 10. The phase distribution φ(x, y) (1) was reconstructed, resulting in the optical path difference OPD(x, y) shown in Fig. 10. The relation between the OPD and the phase distribution is given by φ(x, y) = 2π · OPD(x, y)/λ, with λ being the observed wavelength. The x- and y-axes in the figures have been normalized by the lens pitch such that the scale matches the indices of the corresponding SEG_ij. For some applications, the zonal representation of OPDs at defined grid points might be advantageous, particularly when looking at the square actuator arrangement of commonly used DMs or at localized measurement tasks.

As a second wavefront reconstruction method, a modal-based approach processing weighting factors for Zernike polynomials Z_n^m(r, θ) is presented [26], where r and θ denote cylindrical coordinates

\[ Z_n^m(r, \theta) = \sqrt{n+1}\, R_n^{|m|}(r)\, \Theta_m(\theta) \]
\[ R_n^{|m|}(r) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s (n-s)!\, r^{n-2s}}{s!\left[(n+|m|)/2 - s\right]!\left[(n-|m|)/2 - s\right]!} \]
\[ \Theta_m(\theta) = \begin{cases} \sqrt{2}\cos(|m|\theta), & m > 0 \\ 1, & m = 0 \\ \sqrt{2}\sin(|m|\theta), & m < 0. \end{cases} \tag{7} \]

For a given radial degree n ≥ 0, the azimuthal frequency m goes from −n up to +n with a step of +2.

As described in Section IV-A, with the proposed FPGA algorithm, a maximum of 21 weighting factors can be processed. Fig. 11 shows 14 individual weighting factors and the corresponding Zernike modes of the total 21 calculated modes for the same cylindrical wavefront, excluding the unobservable piston mode. Curvature x and curvature y refer to the 0° and 90° rotations of the cylindrical lens with respect to the x-axis of the sensor, respectively. As expected, the dominant modes of the expanding cylindrical lens are reflected by the coefficients for defocus and x-astigmatism. The rotation of the lens only affects the astigmatic mode while the defocus stays the same, confirming the unchanged distance between the sensor and the lens.

Fig. 12. Reconstructed wavefronts based on the sum of weighted Zernike polynomials for a cylindrical lens with rotations of (a) 0° and (b) 90°. The Southwell approach (black mesh) and the Zernike approach (color coded) lead to the same result, as can clearly be seen.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
the astigmatic mode while the defocus stays the same, confirming the unchanged distance between the sensor and the lens. The superposition of all Zernike polynomials is shown in Fig. 12. A comparison between the wavefront based on the Southwell approach (black mesh) and the wavefront based on the Zernike polynomials (colored surface) shows that this number of weighting factors is sufficient for representing the cylindrical wavefront created with this setup. In addition, the modal coefficients are advantageous when combining individual wavefront correction elements, with each correcting a specific aberration mode. Examples are correction collars to compensate for spherical aberrations in high-numerical-aperture microscopy [31], [32] or a steering mirror to compensate for tip and tilt in AO systems to reduce the stroke of DM actuators [25].
The results of our prototype Shack–Hartmann WFS show the versatility of industrial smart cameras and their potential as robust and easily (re-)configurable WFSs for a wide range of measurement applications. The proposed algorithm processes 21 elements of a result vector, such as weighting factors for Zernike polynomials, and adds a latency of only 740 ns to the image readout, independent of the size of the region of interest, making it suitable for real-time AO systems.

V. CONCLUSION

In this paper, we have presented a low-latency wavefront reconstruction algorithm targeting a low-cost FPGA and the experimental results of a fast Shack–Hartmann WFS based on an industrial smart camera. The FPGA implementation calculates the slopes by center-of-gravity spot detection and feeds them into a vector–matrix multiplication (VMM) to reconstruct the wavefront. The VMM-based reconstruction adds only 740 ns to the image sensor readout while calculating a total of 21 Zernike coefficients, or 21 elements of a result vector in general, at a field of view of up to 30 × 30 SEGs, independent of the adjustable sensing area. The algorithm features runtime parameterization of the region of interest, lens-geometry factors, calibration data, and individual thresholds. The prototype WFS has been tested successfully in a laboratory setup, using a defined wavefront as input for testing the zonal and modal reconstruction approaches. Further work will focus on increasing the number of computable coefficients by systematically investigating the necessary bit width of the fixed-point numbers while maintaining a precise wavefront result.
Future work will be directed toward the implementation of a full AO system on the smart camera, including the WFS as well as the control system for a DM. An approach based on the vector–matrix multiplication of an influence matrix could directly compute signals in the actuator space of a DM. This is expected to considerably reduce the delay between wavefront sensing and aberration correction, improving the bandwidth of AO systems based on a smart camera.

ACKNOWLEDGMENT

The authors would like to thank H. W. Yoo from Delft University of Technology, P. Chang from the Vienna University of Technology, and T. Berndorfer, W. van Dyck, and R. Smodic from Festo AG & Company for their support and fruitful discussions.

REFERENCES

[1] P. Wizinowich, "Adaptive optics and Keck Observatory," IEEE Instrum. Meas. Mag., vol. 8, no. 2, pp. 12–19, Jun. 2005.
[2] D. Neal, D. Pierson, T. O'Hern, J. Torczynski, M. E. Warren, R. Shul, and T. S. McKechnie, "Wavefront sensors for optical diagnostics in fluid mechanics: Application to heated flow, turbulence and droplet evaporation," Proc. SPIE, vol. 2005, no. 1, pp. 194–203, Dec. 1993.
[3] P. Soliz, S. Nemeth, G. Erry, L. Otten, and S. Yang, "Perceived image quality improvements from the application of image deconvolution to retinal images from an adaptive optics fundus imager," in Adaptive Optics for Industry and Medicine, vol. 102, U. Wittrock, Ed. Berlin/Heidelberg, Germany: Springer-Verlag, 2005, ser. Springer Proceedings in Physics, pp. 343–352.
[4] R. K. Tyson, Principles of Adaptive Optics, 3rd ed. Boca Raton, FL: CRC Press, 2011.
[5] Z. Chen and S. Fu, "Optical wavefront distortion due to supersonic flow fields," Chin. Sci. Bull., vol. 54, no. 4, pp. 623–627, Feb. 2009.
[6] M. J. Cyca, S. A. Spiewak, and R. J. Hugo, "Non-invasive mapping of fluid temperature and flow in microsystems," in Proc. ICMENS, 2005, pp. 21–26.
[7] C. Li, G. Hall, B. Aldalali, D. Zhu, K. Eliceiri, and H. Jiang, "Surface profiling and characterization of microlenses utilizing a Shack–Hartmann wavefront sensor," in Proc. Int. Conf. OMN, 2011, pp. 185–186.
[8] S. Campbell, S. M. F. Triphan, R. El-Agmy, A. H. Greenaway, and D. T. Reid, "Direct optimization of femtosecond laser ablation using adaptive wavefront shaping," J. Opt. A, Pure Appl. Opt., vol. 9, no. 11, pp. 1100–1104, Nov. 2007.
[9] F. Bortoletto, "The TNG commissioning [Telescopio Nazionale Galileo]," in Proc. IEEE 16th IMTC, 1999, vol. 2, pp. 627–632.
[10] R. Ragazzoni, "Adaptive optics projects," in Proc. IEEE 16th IMTC, 1999, vol. 2, pp. 1112–1116.
[11] J. Porter, H. M. Queener, J. E. Lin, K. Thorn, and A. Abdul, Adaptive Optics for Vision Science. Hoboken, NJ: Wiley, 2006.
[12] B. Mikulec, J. Vallerga, J. McPhate, A. Tremsin, O. Siegmund, and A. Clark, "A high resolution, high frame rate detector based on a microchannel plate readout with the Medipix2 counting CMOS pixel chip," IEEE Trans. Nucl. Sci., vol. 52, no. 4, pp. 1021–1026, Aug. 2005.
[13] F. Zappa, S. Tisa, S. Cova, P. Maccagnani, D. Calia, R. Saletti, R. Roncella, G. Bonanno, and M. Belluso, "Single-photon avalanche diode arrays for fast transients and adaptive optics," IEEE Trans. Instrum. Meas., vol. 55, no. 1, pp. 365–374, Feb. 2006.
[14] N. Hubin, B. L. Ellerbroek, R. Arsenault, R. M. Clare, R. Dekany, L. Gilles, M. Kasper, G. Herriot, M. Le Louarn, E. Marchetti, S. Oberti, J. Stoesz, J. P. Veran, and C. Vérinaud, "Adaptive optics for extremely large telescopes," in Proc. Int. Astronom. Union, 2006, vol. 232, pp. 60–85.
[15] B. C. Platt and R. Shack, "History and principles of Shack–Hartmann wavefront sensing," J. Refract. Surg., vol. 17, no. 5, pp. S573–S577, Sep./Oct. 2001.
[16] G. J. Hovey, R. Conan, F. Gamache, G. Herriot, Z. Ljusic, D. Quinn, M. Smith, J. P. Veran, and H. Zhang, "An FPGA based computing platform for adaptive optics control," in Proc. 1st Conf. AO4ELT, Y. Clenet, J. Conan, T. Fusco, and G. Rousset, Eds., 2009, pp. 1–6.
[17] K. Kepa, D. Coburn, J. C. Dainty, and F. Morgan, "High speed optical wavefront sensing with low cost FPGAs," Meas. Sci. Rev., vol. 8, no. 4, pp. 87–93, Jan. 2008.
[18] A. Basden, D. Geng, R. Myers, and E. Younger, "Durham adaptive optics real-time controller," Appl. Opt., vol. 49, no. 32, pp. 6354–6363, Nov. 2010.
[19] C. D. Saunter, G. D. Love, M. Johns, and J. Holmes, "FPGA technology for high-speed low-cost adaptive optics," in Proc. SPIE, 2005, vol. 6018, pp. 429–435.
[20] S. Lynch, D. Coburn, F. Morgan, and C. Dainty, "FPGA based adaptive optics control system," in Proc. IET ISSC, 2008, pp. 192–197.
[21] L. Rodriguez-Ramos, A. Alonso, F. Gago, J. Gigante, G. Herrera, and T. Viera, "Adaptive optics real-time control using FPGA," in Proc. Int. Conf. FPL Appl., 2006, pp. 1–6.
[22] R. Paris, M. Thier, T. Thurner, and G. Schitter, "Shack Hartmann wavefront sensor based on an industrial smart camera," in Proc. IEEE I2MTC, 2012, pp. 1127–1132.
THIER et al.: LOW-LATENCY SHACK–HARTMANN WAVEFRONT SENSOR BASED ON AN INDUSTRIAL SMART CAMERA
[23] J. W. Goodman, Introduction to Fourier Optics, 3rd ed. Greenwood Village, CO: Roberts & Company Publishers, 2005.
[24] Y. Dai, F. Li, X. Cheng, Z. Jiang, and S. Gong, "Analysis on Shack–Hartmann wave-front sensor with Fourier optics," Opt. Laser Technol., vol. 39, no. 7, pp. 1374–1379, Oct. 2007.
[25] P.-Y. Madec, "Control techniques," in Adaptive Optics in Astronomy. Cambridge, U.K.: Cambridge Univ. Press, 1999, pp. 131–154.
[26] G. M. Dai, Wavefront Optics for Vision Correction. Bellingham, WA: SPIE, 2008.
[27] L. A. Poyneer, D. T. Gavel, and J. M. Brase, "Fast wave-front reconstruction in large adaptive optics systems with use of the Fourier transform," J. Opt. Soc. Amer. A, Opt., Image Sci., Vis., vol. 19, no. 10, pp. 2100–2111, Oct. 2002.
[28] D. L. Fried, "Least-square fitting a wave-front distortion estimate to an array of phase-difference measurements," J. Opt. Soc. Amer., vol. 67, no. 3, pp. 370–375, Mar. 1977.
[29] W. H. Southwell, "Wave-front estimation from wave-front slope measurements," J. Opt. Soc. Amer., vol. 70, no. 8, pp. 998–1006, Aug. 1980.
[30] W. van Dyck, R. Smodic, H. Hufnagl, and T. Berndorfer, "High-speed JPEG coder implementation for a smart camera," J. Real-Time Image Process., vol. 1, no. 1, pp. 63–68, Oct. 2006.
[31] M. Schwertner, M. J. Booth, and T. Wilson, "Simple optimization procedure for objective lens correction collar setting," J. Microsc., vol. 217, no. 3, pp. 184–187, Mar. 2005.
[32] H. W. Yoo, M. Verhaegen, M. E. van Royen, and G. Schitter, "Automated adjustment of aberration correction in scanning confocal microscopy," in Proc. IEEE I2MTC, 2012, pp. 1083–1088.

Markus Thier received his M.S. degree in automation from the Vienna University of Technology in 2012. Since June 2012, he has been a research assistant working toward his Ph.D. at the Automation and Control Institute (ACIN), Faculty of Electrical Engineering and Information Technology, Vienna University of Technology. His research fields are field-programmable gate array based real-time measurement systems and adaptive optics.

Rene Paris (S'12) received his M.S. degree in automation from the Vienna University of Technology in 2010. In addition to his studies, he worked from 2008 to 2010 in the field of image processing and automation. Since October 2010, he has been a research assistant performing research toward his Ph.D. at the Automation and Control Institute (ACIN), Faculty of Electrical Engineering and Information Technology, Vienna University of Technology, where his research interests are focused on optical measurement and smart-camera-based measurement systems.

Thomas Thurner (M'02) completed his studies in electrical engineering at the Graz University of Technology, with a focus on measurement and control engineering, in 1999, and his doctorate in technical sciences at the Graz University of Technology in 2004. From 2000 to 2008, he worked as an assistant professor at the Institute of Electrical Measurement and Measurement Signal Processing at the Graz University of Technology. Since June 2008, he has been heading the fatigue testing facility at the Graz University of Technology.

Georg Schitter (SM'11) is a Professor for Industrial Automation at the Automation and Control Institute (ACIN) in the Faculty of Electrical Engineering and Information Technology of the Vienna University of Technology. His primary research interests are high-performance mechatronic systems and multidisciplinary systems integration, particularly for precision engineering applications in the high-tech industry, scientific instrumentation, and mechatronic imaging systems that require precise positioning combined with high bandwidths, such as scanning probe microscopy, adaptive optics, and lithography systems for the semiconductor industry.
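As a closing illustration of the modal approach of (7) and the single vector–matrix multiplication emphasized in the Conclusion, the following sketch evaluates Z_n^m and recovers mode weights with one precomputed matrix–vector product. This is not the sensor's FPGA implementation: the fit here maps sampled wavefront values (rather than measured spot slopes) to coefficients, and all sample values are invented; the structural point, a matrix precomputed offline so that runtime reconstruction is a single multiply, is the same.

```python
import math
import numpy as np

def zernike(n, m, r, theta):
    """Evaluate Z_n^m(r, theta) as defined in (7), with sqrt(n+1) normalization."""
    R = sum(
        (-1) ** s * math.factorial(n - s)
        / (math.factorial(s)
           * math.factorial((n + abs(m)) // 2 - s)
           * math.factorial((n - abs(m)) // 2 - s))
        * r ** (n - 2 * s)
        for s in range((n - abs(m)) // 2 + 1)
    )
    if m > 0:
        ang = math.sqrt(2) * np.cos(abs(m) * theta)
    elif m == 0:
        ang = 1.0
    else:
        ang = math.sqrt(2) * np.sin(abs(m) * theta)
    return math.sqrt(n + 1) * R * ang

# All modes up to radial degree 3, skipping the unobservable piston (0, 0).
modes = [(n, m) for n in range(4) for m in range(-n, n + 1, 2)][1:]

# Sample points on the unit disk (illustrative; a real sensor samples at lenslets).
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0.0, 1.0, 400))
theta = rng.uniform(0.0, 2.0 * np.pi, 400)

# Precompute the fit matrix offline; the runtime work is then a single VMM.
B = np.column_stack([zernike(n, m, r, theta) for n, m in modes])
fit = np.linalg.pinv(B)

# Synthetic wavefront: 0.7 x defocus Z_2^0 plus 0.3 x astigmatism Z_2^2.
wavefront = 0.7 * zernike(2, 0, r, theta) + 0.3 * zernike(2, 2, r, theta)
coeffs = fit @ wavefront  # one matrix-vector multiply recovers the weights
```

The dominant recovered coefficients are those for defocus and astigmatism, mirroring the cylindrical-lens measurement discussed with Fig. 11.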