QNat Analytical Solutions
Abstract
This document presents the analytical solutions developed by the QNat group for the
hackathon held during the Second Quantum Computing School organized by ICTP-
SAIFR.
1. Bose-Einstein Condensates and the Involvement in Advances for New Technologies
Exercise 1: Interactions between atoms and the low-energy limit
In cold atomic clouds, it is possible to have particle separations which are one order
of magnitude larger than the length scale associated with atom-atom interactions. Con-
sequently, two-body interactions are much more relevant than higher-body interactions.
Moreover, the very low temperatures (and other relevant energy scales) achieved in these
systems justify employing low-energy scattering theory.
In this first exercise, we will focus on two particles of mass m interacting via a two-
body potential U (r) of finite range R. Their energy in the center-of-mass frame is given
by
E = ℏ²k²/(2m_r) = ℏ²k²/m,
where mr is the reduced mass. We may compute the angular momentum of the pair,
and associate a quantum number ℓ to it.
(a) What is the low-energy limit in terms of k and R? Why is the ℓ = 0 component the
most relevant one in low-energy scattering theory (the so-called s-wave scattering)?
(b) What is the s-wave scattering length? Explain in one or two sentences!
(c) To first order in the interaction (Born approximation), the scattering length is
given by:
a = (m_r/2πℏ²) ∫ d³r U(r).
This is a very interesting result because it shows us that potentials with different
shapes produce the same scattering length as long as their volumetric integral is the
same. Use this result to create a two-body potential with scattering length a of the
form:
Ueff (r, r′ ) = U0 δ(r − r′ ),
where the two particles are situated at r and r′ , and U0 is a constant you have to calculate.
The potential in Eq. (7) is called a contact interaction since the potential is non-zero
only if the two particles are at the same position. It presents a much simpler alternative
to the true interatomic potentials (which can be quite complicated) and retains the
relevant physical information.
Solution:
Author: Moisés da Rocha Alves
(a) The energy of the particles is related to the wave number k by the equation:
E = ℏ²k²/(2m_r) = ℏ²k²/m,   (1.1)
which can also be written as:
E = ℏ²/(m λ̄²),   (1.2)
where λ̄ = λ/2π is the reduced de Broglie wavelength.
In the low-energy scattering regime, we consider solutions where the reduced wave-
length is much larger than the range of the potential. Thus, we are interested in
situations where λ̄ ≫ R, which is equivalent to the condition kR ≪ 1, noting that
λ̄ = 1/k. Therefore, the low-energy limit corresponds to kR ≪ 1.
Now, the reason why the ℓ = 0 component is the most relevant in low-energy scattering
theory (k → 0) is primarily because of the centrifugal barrier. The effective potential
for a partial wave with angular momentum ℓ in the context of scattering theory is given
by:
V_eff(r) = V(r) + ℏ²ℓ(ℓ + 1)/(2m_r r²).   (1.3)
If the energy is close to zero, the second term of this potential creates a centrifugal
barrier that prevents the wave function from penetrating the scattering region unless
ℓ = 0. As a result, in the low-energy regime, only the term with ℓ = 0 becomes
relevant, making the contributions from the partial waves with ℓ > 0 negligible.
(b) The s-wave scattering length a can be interpreted as the length scale over which the interaction between the particles is significant at low energies. It is mathematically defined through the s-wave phase shift by lim_{k→0} k cot δ₀(k) = −1/a in the context of low-energy scattering, and its sign indicates whether the effective low-energy interaction is repulsive (a > 0) or attractive (a < 0).
(c) We can relate the two-body potential to the scattering length through the following
derivation. Using the given two-body potential, we obtain the result:
a = (m_r/2πℏ²) ∫ d³r U0 δ(r − r′).   (1.4)
Now, noticing that the Dirac delta is constrained to satisfy the identity:
∫ d³r δ(r − r′) = 1,   (1.5)
Eq. (1.4) reduces to
a = m_r U0/(2πℏ²),   (1.6)
so that, using m_r = m/2 for equal masses,
U0 = 2πℏ²a/m_r = 4πℏ²a/m,   (1.7)
which gives us the desired result for the two-body potential:
U_eff(r, r′) = (2πℏ²a/m_r) δ(r − r′) = (4πℏ²a/m) δ(r − r′).   (1.8)
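As a rough numerical illustration of this result (a sketch, not part of the original solution), one can plug typical values into U0 = 4πℏ²a/m. The scattering length of about 100 Bohr radii and the 87Rb atomic mass below are assumed, order-of-magnitude inputs rather than data from the exercise.

import numpy as np

# Illustrative, assumed inputs (typical 87Rb values), not data from the exercise.
hbar = 1.054571817e-34           # J s
a0 = 5.29177210903e-11           # Bohr radius, m
a = 100 * a0                     # assumed s-wave scattering length, m
m = 87 * 1.66053906660e-27       # 87Rb atomic mass, kg

# Contact-interaction strength from Eq. (1.8): U0 = 2*pi*hbar^2*a/m_r = 4*pi*hbar^2*a/m
U0 = 4 * np.pi * hbar**2 * a / m
print(f"U0 = {U0:.2e} J m^3")    # of order 1e-51 J m^3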
Exercise 2
We now consider a gas of N identical bosons of mass m described by the Hamiltonian
H = Σ_{i=1}^{N} [ p_i²/(2m) + V(r_i) ] + U0 Σ_{i<j} δ(r_i − r_j),
where V (ri ) is an external (one-body) potential, and we have made use of the result
from exercise 1(c), Eq. (7), to write the interparticle interactions.
We now adopt a mean-field (or Hartree) approach and assume that the wave function
is a symmetrized product of single-particle wave functions. In the fully condensed state,
all particles are in the same single-particle state ϕ(r); hence the many-body wave function
is given by:
Ψ(r_1, r_2, . . . , r_N) = Π_{i=1}^{N} ϕ(r_i).
(a) Deduce the expression for the expectation value of the Hamiltonian of Eq. (8) in
the state of Eq. (9).
(b) Alternatively, we can introduce the wave function of the condensate state, ψ(r) = N^{1/2} ϕ(r). Neglecting terms of order 1/N, the solution of (a) becomes:
E(ψ) = ∫ d³r [ (ℏ²/2m)|∇ψ(r)|² + V(r)|ψ(r)|² + (1/2)U0|ψ(r)|⁴ ].
To find the optimal ψ(r), we have to minimize Eq. (10) with the constraint of a constant number of particles, N = ∫ d³r |ψ(r)|². This can be done conveniently
using Lagrange multipliers, δE − µδN = 0, where µ is the chemical potential.
Use this procedure and consider variations with respect to ψ ∗ (r) to obtain the
time-independent GPE.
You will see that it resembles the Schrödinger equation, but a non-linear term takes
into account the mean field produced by the other particles. Another difference is
that the eigenvalue is the chemical potential, not the energy per particle, as it is
for the usual Schrödinger equation. The chemical potential is equal to the energy
per particle for non-interacting particles (all in the same state), but for interacting
particles, it is not.
Solution:
Authors: Paulo Vitor de Queiroz Ferreira and Moisés da Rocha Alves
(a) We want to deduce the expectation value of the Hamiltonian given by the exercise,
thus
⟨H⟩ = ⟨H0 ⟩ + ⟨U ⟩, (1.9)
where ⟨H0 ⟩ and ⟨U ⟩ are given by:
⟨H0⟩ = ⟨ Σ_{i=1}^{N} [ p_i²/(2m) + V(r_i) ] ⟩,
⟨U⟩ = ⟨ U0 Σ_{i<j} δ(r_i − r_j) ⟩.
First, we calculate ⟨H0⟩. Using the mean-field ansatz, we can express it in terms of the expectation value for a single particle multiplied by N, since all N particles occupy the same state. Consequently, ⟨H0⟩ can be expressed as follows:
⟨H0⟩ = ∫ Ψ*(r_1, r_2, . . . , r_N) Σ_{i=1}^{N} [ p_i²/(2m) + V(r_i) ] Ψ(r_1, r_2, . . . , r_N) d³r_1 d³r_2 . . . d³r_N,   (1.10)
which simplifies to:
⟨H0⟩ = −(Nℏ²/2m) ∫ ϕ*(r)∇²ϕ(r) d³r + N ∫ ϕ*(r)V(r)ϕ(r) d³r.   (1.11)
We are first interested in evaluating the first (kinetic) integral of the above equation. To this end, we utilize the following property:
∇ · (ϕ*(r)∇ϕ(r)) = ∇ϕ*(r) · ∇ϕ(r) + ϕ*(r)∇²ϕ(r).   (1.12)
By integrating both sides of this identity over the entire space, we obtain:
−∫ ϕ*(r)∇²ϕ(r) d³r = ∫ ∇ϕ*(r) · ∇ϕ(r) d³r − ∫ ∇ · (ϕ*(r)∇ϕ(r)) d³r.   (1.13)
By imposing a localization condition on ϕ(r), i.e. that it decays rapidly at infinity, the second term on the right-hand side vanishes (by the divergence theorem it is a surface term at infinity), and we get
−∫ ϕ*(r)∇²ϕ(r) d³r = ∫ |∇ϕ(r)|² d³r,   (1.16)
so that
⟨H0⟩ = (Nℏ²/2m) ∫ |∇ϕ(r)|² d³r + N ∫ V(r)|ϕ(r)|² d³r.   (1.17)
Next, we calculate ⟨U⟩ in the same way:
⟨U⟩ = U0 Σ_{i<j} ∫ Ψ*(r_1, . . . , r_N) δ(r_i − r_j) Ψ(r_1, . . . , r_N) d³r_1 . . . d³r_N.   (1.18)
Writing Ψ as the product of single-particle wave functions,
⟨U⟩ = U0 Σ_{i<j} ∫ Π_{k=1}^{N} |ϕ(r_k)|² δ(r_i − r_j) d³r_1 . . . d³r_N,   (1.19)
and integrating over every coordinate that does not appear in the delta function (each such integral gives 1 by normalization), we are left with
⟨U⟩ = U0 Σ_{i<j} ∫ d³r_i d³r_j |ϕ(r_i)|² |ϕ(r_j)|² δ(r_i − r_j).   (1.20)
Using the filtering property of the Dirac delta function,
⟨U⟩ = U0 Σ_{i<j} ∫ d³r_i |ϕ(r_i)|⁴.   (1.21)
Since the summation runs over all distinct pairs, the number of terms is given by the binomial coefficient
(N choose 2) = N!/(2!(N − 2)!) = N(N − 1)(N − 2)!/(2(N − 2)!) = N(N − 1)/2,   (1.22)
so that
⟨U⟩ = (N(N − 1)/2) U0 ∫ d³r |ϕ(r)|⁴.   (1.23)
Thus, the expectation value of the Hamiltonian has the following form:
⟨H⟩ = (Nℏ²/2m) ∫ |∇ϕ(r)|² d³r + N ∫ V(r)|ϕ(r)|² d³r + (N(N − 1)/2) U0 ∫ d³r |ϕ(r)|⁴.   (1.24)
If we now use the wave function of the condensate state, ψ(r) = N^{1/2} ϕ(r), the interaction term becomes (N(N − 1)/2) U0 ∫ d³r |ϕ(r)|⁴ = (1/2)(1 − 1/N) U0 ∫ d³r |ψ(r)|⁴; neglecting the term of order 1/N, the solution becomes the equation given in (b),
⟨H⟩ = ∫ d³r [ (ℏ²/2m)|∇ψ(r)|² + V(r)|ψ(r)|² + (1/2)U0|ψ(r)|⁴ ].   (1.25)
(b) We aim to minimize the functional E(ψ), which depends on the wave function ψ(r), while imposing the constraint that the number of particles N = ∫ d³r |ψ(r)|² remains constant. To do this, we will use the method of Lagrange multipliers,
δE − µδN = 0, (1.26)
where µ is the Lagrange multiplier, which in this context corresponds to the chemical
potential.
Now we create a new energy function that takes into account the constraint on the
number of particles:
f(ψ) = E(ψ) − µ ( ∫ d³r |ψ(r)|² − N ).   (1.27)
We can ignore the constant term +µN, since it does not affect the result of the minimization. By minimizing the new function f(ψ), we are not only minimizing the energy E(ψ) but also ensuring that the wave function ψ(r) remains normalized to the particle number N. Written out explicitly,
f(ψ) = ∫ d³r [ (ℏ²/2m)|∇ψ(r)|² + V(r)|ψ(r)|² + (1/2)U0|ψ(r)|⁴ − µ|ψ(r)|² ] + µN.   (1.28)
The next step is to perform the minimization, which consists of taking the functional derivative of f(ψ) with respect to the quantity we vary, in this case ψ*(r), and setting it to zero:
∂f(ψ)/∂ψ*(r) = 0.   (1.29)
To make the solution clearer throughout the calculations, we will take each part of the
function f (ψ) individually.
Starting with the kinetic part of our functional, K = ∫ d³r (ℏ²/2m)|∇ψ(r)|²:
∂K/∂ψ*(r) = ∂/∂ψ*(r) { ∫ d³r′ (ℏ²/2m) ∇ψ*(r′) · ∇ψ(r′) }.   (1.30)
We use the result of Eq. (1.16) to simplify the calculation; since ψ satisfies the same boundary conditions (it decays rapidly at infinity), the same integration by parts applies.
∂K/∂ψ*(r) = ∂/∂ψ*(r) { −∫ d³r′ (ℏ²/2m) ψ*(r′)∇²ψ(r′) }
          = −(ℏ²/2m) ∫ d³r′ [ (∂ψ*(r′)/∂ψ*(r)) ∇²ψ(r′) + ψ*(r′) ∂/∂ψ*(r)(∇²ψ(r′)) ].   (1.31)
The second term arising from the product rule is zero, since ψ(r′) does not depend on ψ*(r). The term ∂ψ*(r′)/∂ψ*(r) = δ(r′ − r) appears because we are differentiating with respect to ψ*(r), which requires the evaluation point r′ to coincide with r.
∂K/∂ψ*(r) = −(ℏ²/2m) ∫ d³r′ δ(r′ − r) ∇²ψ(r′).   (1.32)
Considering the delta function’s filtering property, we finally obtain the kinetic part of
our equation:
∂K/∂ψ*(r) = −(ℏ²/2m) ∇²ψ(r).   (1.33)
Next, for the external-potential part, P = ∫ d³r V(r)|ψ(r)|², we proceed in the same way:
∂P/∂ψ*(r) = ∂/∂ψ*(r) ∫ d³r′ V(r′) ψ*(r′)ψ(r′)
          = ∫ d³r′ V(r′) [ (∂ψ*(r′)/∂ψ*(r)) ψ(r′) + ψ*(r′) (∂ψ(r′)/∂ψ*(r)) ]   (1.34)
          = ∫ d³r′ V(r′) δ(r′ − r) ψ(r′)
          = V(r)ψ(r).
The penultimate term in Eq. (1.28) corresponds to the interaction potential, I = ∫ d³r (1/2)U0|ψ(r)|⁴, and we want to find ∂I/∂ψ*(r). Therefore,
∂I/∂ψ*(r) = ∂/∂ψ*(r) ∫ d³r′ (1/2)U0 (ψ*(r′)ψ(r′))²
          = ∫ d³r′ (1/2)U0 [ 2ψ*(r′) (∂ψ*(r′)/∂ψ*(r)) ψ(r′)² + 2ψ*(r′)² ψ(r′) (∂ψ(r′)/∂ψ*(r)) ]   (1.35)
          = ∫ d³r′ (1/2)U0 2ψ*(r′) δ(r′ − r) ψ(r′)²
          = U0 |ψ(r)|² ψ(r).
Lastly, for the term containing the constraint, Λ = −µ ∫ d³r |ψ(r)|², we need to find ∂Λ/∂ψ*(r), giving us:
∂Λ/∂ψ*(r) = −µ ∂/∂ψ*(r) ∫ d³r′ ψ*(r′)ψ(r′)
          = −µ ∫ d³r′ [ (∂ψ*(r′)/∂ψ*(r)) ψ(r′) + ψ*(r′) (∂ψ(r′)/∂ψ*(r)) ]   (1.36)
          = −µψ(r).
Collecting Eqs. (1.33)-(1.36) in the condition ∂f(ψ)/∂ψ*(r) = 0 finally gives the time-independent GPE:
−(ℏ²/2m)∇²ψ(r) + V(r)ψ(r) + U0|ψ(r)|²ψ(r) = µψ(r).   (1.37)
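The variational problem of part (b) can also be explored numerically. The sketch below is a minimal 1D illustration under assumed parameters (ℏ = m = 1, a harmonic trap, and an interaction strength g standing in for U0; none of these values come from the exercise): it minimizes the Gross-Pitaevskii energy functional by imaginary-time propagation, re-imposing the normalization constraint after every step, and then evaluates the chemical potential.

import numpy as np

# Minimal 1D sketch (assumed units hbar = m = 1, harmonic trap, g in place of U0):
# imaginary-time propagation minimizes E(psi) subject to the constraint
# integral |psi|^2 dx = N, mirroring the Lagrange-multiplier procedure above.
x = np.linspace(-8, 8, 512)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
V = 0.5 * x**2                 # external potential V(x)
g = 1.0                        # assumed interaction strength (plays the role of U0)
N = 1.0                        # norm of psi (number of particles)
dt = 1e-3

psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx / N)

for _ in range(5000):
    # split-step imaginary-time evolution: kinetic half-step, potential step, kinetic half-step
    psi = np.fft.ifft(np.exp(-0.25 * dt * k**2) * np.fft.fft(psi))
    psi *= np.exp(-dt * (V + g * np.abs(psi)**2))
    psi = np.fft.ifft(np.exp(-0.25 * dt * k**2) * np.fft.fft(psi))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx / N)   # re-impose the constraint

# chemical potential: mu = (1/N) * integral psi* [ -1/2 d^2/dx^2 + V + g|psi|^2 ] psi dx
lap = np.fft.ifft(-(k**2) * np.fft.fft(psi))
mu = np.real(np.sum(np.conj(psi) * (-0.5 * lap + (V + g * np.abs(psi)**2) * psi)) * dx) / N
print("chemical potential mu =", mu)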
(c) A uniform Bose gas refers to a system where the gas is uniformly distributed throughout
space and it has no external potential acting on it. The Hamiltonian for such a system
corresponds to the one studied in the previous sections, but now disregarding the
external potential term. This results in a simplified Hamiltonian as follows:
H = −Σ_{i=1}^{N} (ℏ²/2m) ∇_i² + U0 Σ_{i<j} δ(r_i − r_j).   (1.39)
We will proceed with a similar treatment to what was done in the previous sections.
First, we are interested in the expectation value of this Hamiltonian:
⟨H⟩ = ⟨ −Σ_{i=1}^{N} (ℏ²/2m) ∇_i² + U0 Σ_{i<j} δ(r_i − r_j) ⟩.   (1.40)
If we compare this to the result from section (a), we can simply ignore the external
potential term, and we will have the corresponding expectation value, which is given
by:
⟨H⟩ = ∫ d³r [ (ℏ²/2m)|∇ψ(r, t)|² + (1/2)U0|ψ(r, t)|⁴ ].   (1.41)
From this, we can derive the GPE using the variational principle, as was done in section
(b). Let’s take a step back and look at the action. The action is the fundamental object
in the variational principle and is given by the integral of the Lagrangian over time:
S(ψ, ψ*) = ∫ dt L(ψ, ψ*),   (1.42)
where the Lagrangian is
L(ψ, ψ*) = ∫ d³r [ iℏ ψ*(r, t) ∂ψ(r, t)/∂t − H(ψ, ψ*) ],   (1.43)
with the energy density
H(ψ, ψ*) = (ℏ²/2m)|∇ψ(r, t)|² + (1/2)U0|ψ(r, t)|⁴.   (1.44)
Now, we take the functional derivative of the action with respect to ψ*(r, t) and set it to zero:
δS(ψ, ψ*)/δψ*(r, t) = 0.   (1.45)
We only need to calculate the variation in the first term of the action, as the other
derivatives have already been obtained in section (b),
∂/∂ψ*(r, t) ( iℏ ψ*(r, t) ∂ψ(r, t)/∂t ) = iℏ ∂ψ(r, t)/∂t.   (1.46)
Combining it with the results obtained in Eq. (1.33) and Eq. (1.35), we can write the
action minimization as:
δS(ψ, ψ*)/δψ*(r, t) = 0,
iℏ ∂ψ(r, t)/∂t + (ℏ²/2m)∇²ψ(r, t) − U0|ψ(r, t)|²ψ(r, t) = 0,   (1.47)
iℏ ∂ψ(r, t)/∂t = [ −(ℏ²/2m)∇² + U0|ψ(r, t)|² ] ψ(r, t),
such that this corresponds to the time-dependent GPE for a uniform Bose gas. We can look for a stationary solution by assuming a wave function of the form:
ψ(r, t) = ψ0(r) e^{−iµt/ℏ}.   (1.48)
In this case, µ corresponds to the chemical potential, which reflects the conservation of the number of particles. Substituting this wave function directly into Eq. (1.47), we obtain:
µψ0(r) = [ −(ℏ²/2m)∇² + U0|ψ0(r)|² ] ψ0(r).   (1.49)
Considering a uniform Bose gas, the condensate density is the same everywhere, so ∇²ψ0(r) = 0. This simplifies the time-independent GPE to:
µ = U0|ψ0|² = U0 n0.   (1.50)
This tells us that the chemical potential of a uniform condensate is proportional to the condensate density n0 = |ψ0|².
2. Prospects and Challenges for Quantum Machine Learning
Exercise 1
Let V = C2 be the Hilbert space of a single qubit. Then, consider the set of objects
{1, X}, where 1 is the 2 × 2 identity matrix and X the Pauli-x matrix. Show that these
objects, which represent bit-flips, form a group.
Solution:
Authors: Tailan Santos Sarubi and Moisés da Rocha Alves
In this exercise, we need to prove that {1, X} forms a group under matrix multiplication.
Before getting into our solution, we need to state what a group is.
Definition 2.1. A group is a non-empty set G equipped with a binary operation ∗ : G×G → G
satisfying the following axioms:
• (i) Closure: if a, b ∈ G, then a ∗ b ∈ G.
• (ii) Associativity: a ∗ (b ∗ c) = (a ∗ b) ∗ c for all a, b, c ∈ G.
• (iii) Identity: there is an element e ∈ G, such that a ∗ e = e ∗ a = a for all a ∈ G.
• (iv) Inverse: for each element a ∈ G, there is an element b ∈ G such that a∗b = e = b∗a.
First, we will prove that {1, X} satisfies the closure axiom. One can verify that the following products hold:
1·1 = (1 0; 0 1)(1 0; 0 1) = (1 0; 0 1) = 1,   (2.1)
1·X = (1 0; 0 1)(0 1; 1 0) = (0 1; 1 0) = X,   (2.2)
X·1 = (0 1; 1 0)(1 0; 0 1) = (0 1; 1 0) = X,   (2.3)
X·X = (0 1; 1 0)(0 1; 1 0) = (1 0; 0 1) = 1,   (2.4)
so every product of elements of {1, X} is again in {1, X}, and the set is closed under matrix multiplication.
Associativity holds because matrix multiplication is associative; explicitly, for example,
1(1·1) = (1·1)1 = 1,   (2.5)
1(X·X) = (1·X)X = 1,   (2.9)
and similarly for the remaining combinations. The identity element is 1 itself, by Eqs. (2.1)-(2.3). Finally, each element is its own inverse, since 1·1 = 1 and X·X = 1 by Eqs. (2.1) and (2.4). Therefore {1, X} satisfies all four group axioms and forms a group under matrix multiplication.
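The four axioms can also be checked mechanically. The following Python sketch (an illustration, not part of the original solution) verifies them for {1, X} with numpy.

import numpy as np
from itertools import product

# Check the group axioms for G = {1, X} under matrix multiplication.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
G = [I2, X]

def in_G(M):
    return any(np.allclose(M, g) for g in G)

assert all(in_G(a @ b) for a, b in product(G, G))                          # (i) closure
assert all(np.allclose(a @ (b @ c), (a @ b) @ c)
           for a, b, c in product(G, G, G))                                # (ii) associativity
assert all(np.allclose(a @ I2, a) and np.allclose(I2 @ a, a) for a in G)   # (iii) identity
assert all(any(np.allclose(a @ b, I2) for b in G) for a in G)              # (iv) inverses
print("{1, X} satisfies all four group axioms")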
Exercise 2
Prove that the set of all unitaries of the form U = e^{−iϕ3 Y} e^{−iϕ2 X} e^{−iϕ1 Y} constitutes a representation of the unitary Lie group SU(2).
Solution:
Author: Tailan Santos Sarubi.
The unitary Lie group SU(2) consists of all unitary 2 × 2 matrices with determinant 1. First of all, let us prove that U is unitary and has determinant equal to 1. A matrix is unitary if and only if U†U = 1, and since X and Y are Hermitian, each factor satisfies (e^{−iϕX})† = e^{iϕX} (and likewise for Y). Therefore:
U†U = e^{iϕ1 Y} e^{iϕ2 X} e^{iϕ3 Y} e^{−iϕ3 Y} e^{−iϕ2 X} e^{−iϕ1 Y} = 1.
Moreover, det(e^{A}) = e^{tr A} and the Pauli matrices are traceless, so
det U = det(e^{−iϕ3 Y}) det(e^{−iϕ2 X}) det(e^{−iϕ1 Y}) = 1,
as we wanted.
The next step is to show that U can represent all matrices of SU (2). A generic element
of SU (2) can be expressed as:
U = (a  −b*; b  a*),   with |a|² + |b|² = 1,   (2.13)
where a and b are complex numbers. This condition ensures that U is unitary and has
determinant 1.
For the element U given in the statement, X and Y are the generators of the Lie algebra
of SU (2). These generators represent the rotation operators around the x- and y-axes.
Explicitly, we have:
X = (0 1; 1 0),   Y = (0 −i; i 0).
These operators satisfy the commutation relations of the Lie algebra of SU (2):
[X, Y ] = 2iZ, [Y, Z] = 2iX, [Z, X] = 2iY,
where Z is the generator around the z-axis, given by:
Z = (1 0; 0 −1).
We can interpret the expression U = e^{−iϕ3 Y} e^{−iϕ2 X} e^{−iϕ1 Y} as a sequence of rotations. These are exactly the Euler rotations used to parametrize elements of the group SU(2). Since the rotation operators are R_y(θ) = e^{−iθY/2} and R_x(θ) = e^{−iθX/2}, the given expression is consistent with the expected structure for elements of SU(2):
U = R_y(2ϕ3) R_x(2ϕ2) R_y(2ϕ1).
The matrices associated with the exponentials of the generators X and Y follow from X² = Y² = 1:
e^{−iϕY} = cos ϕ 1 − i sin ϕ Y = (cos ϕ  −sin ϕ; sin ϕ  cos ϕ),
and
e^{−iϕX} = cos ϕ 1 − i sin ϕ X = (cos ϕ  −i sin ϕ; −i sin ϕ  cos ϕ).
We will now multiply these matrices in the order e^{−iϕ3 Y} e^{−iϕ2 X} e^{−iϕ1 Y}. Multiplying e^{−iϕ2 X} and e^{−iϕ1 Y} first:
e^{−iϕ2 X} e^{−iϕ1 Y} = (cos ϕ2  −i sin ϕ2; −i sin ϕ2  cos ϕ2)(cos ϕ1  −sin ϕ1; sin ϕ1  cos ϕ1).
Performing the multiplication gives:
e^{−iϕ2 X} e^{−iϕ1 Y} = (cos ϕ1 cos ϕ2 − i sin ϕ1 sin ϕ2   −sin ϕ1 cos ϕ2 − i cos ϕ1 sin ϕ2 ; sin ϕ1 cos ϕ2 − i cos ϕ1 sin ϕ2   cos ϕ1 cos ϕ2 + i sin ϕ1 sin ϕ2).
Now, multiplying this result on the left by e^{−iϕ3 Y}, we get as the final result, after some simplification:
U = (cos ϕ2 cos(ϕ1+ϕ3) − i sin ϕ2 sin(ϕ1−ϕ3)   −cos ϕ2 sin(ϕ1+ϕ3) − i sin ϕ2 cos(ϕ1−ϕ3) ; cos ϕ2 sin(ϕ1+ϕ3) − i sin ϕ2 cos(ϕ1−ϕ3)   cos ϕ2 cos(ϕ1+ϕ3) + i sin ϕ2 sin(ϕ1−ϕ3)),
where we can identify, comparing with Eq. (2.13):
a = cos ϕ2 cos(ϕ1 + ϕ3) − i sin ϕ2 sin(ϕ1 − ϕ3),   (2.14)
and
b = cos ϕ2 sin(ϕ1 + ϕ3) − i sin ϕ2 cos(ϕ1 − ϕ3),   (2.15)
which satisfy |a|² + |b|² = cos²ϕ2 + sin²ϕ2 = 1. By choosing ϕ2 and the combinations ϕ1 + ϕ3 and ϕ1 − ϕ3 appropriately, any admissible pair (a, b) can be reached, so the parametrization (ϕ1, ϕ2, ϕ3) covers all SU(2) matrices.
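A quick numerical check of this parametrization (a sketch, not part of the original solution) is shown below: for randomly drawn angles, U is unitary with unit determinant, and its entries agree with the closed-form a and b of Eqs. (2.14) and (2.15).

import numpy as np

def expY(phi):
    # exp(-i*phi*Y) = cos(phi)*1 - i*sin(phi)*Y
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]], dtype=complex)

def expX(phi):
    # exp(-i*phi*X) = cos(phi)*1 - i*sin(phi)*X
    return np.array([[np.cos(phi), -1j * np.sin(phi)],
                     [-1j * np.sin(phi), np.cos(phi)]])

rng = np.random.default_rng(0)
p1, p2, p3 = rng.uniform(0, 2 * np.pi, 3)
U = expY(p3) @ expX(p2) @ expY(p1)

a = np.cos(p2) * np.cos(p1 + p3) - 1j * np.sin(p2) * np.sin(p1 - p3)
b = np.cos(p2) * np.sin(p1 + p3) - 1j * np.sin(p2) * np.cos(p1 - p3)

assert np.allclose(U.conj().T @ U, np.eye(2))                        # unitarity
assert np.isclose(np.linalg.det(U), 1.0)                             # determinant 1
assert np.allclose(U, np.array([[a, -np.conj(b)], [b, np.conj(a)]]))
print("U lies in SU(2) and matches the closed-form (a, b)")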
Challenge
Show that the dimension of the k-th order commutant C^(k)(G) is given by D = Σ_λ mλ², where mλ is the multiplicity with which the irreducible representation λ appears in R(g)^{⊗k}.
Solution:
Author: Moisés da Rocha Alves
First of all, we need to recall the exact definition of the commutant C^(k)(G), as stated in the lecture notes.
Definition 2.2. Given some representation R of G, we define its k-th order commutant, denoted C^(k)(G), to be the vector subspace of the space of bounded linear operators on H^{⊗k} consisting of the operators that commute with R(g)^{⊗k} for all g in G. That is:
C^(k)(G) = { A : [A, R(g)^{⊗k}] = 0 for all g ∈ G }.   (2.16)
As already stated in Eq. 9 of the lecture notes, notice that if R(g)^{⊗k} is a completely reducible representation, then it can be described as having a block diagonal structure of irreducible representations:
R(g)^{⊗k} ≃ ⊕_λ 1_{mλ} ⊗ R_λ(g),   (2.17)
where λ labels the irreducible representations, and mλ is the multiplicity factor (which indicates how many times each irreducible representation appears in the block diagonal structure).
As also stated in Eq. 17 of the lecture notes, the operator A described in Definition 2.2 must be of the form:
A ≃ ⊕_λ Aλ ⊗ 1_{dλ},   (2.18)
where Aλ is an arbitrary mλ × mλ matrix acting on the multiplicity space and dλ = dim(Rλ).
With all these definitions set, notice that the Hilbert space can then be decomposed as
H^{⊗k} ≃ ⊕_λ C^{mλ} ⊗ C^{dλ},   (2.19)
where C^{mλ} is the multiplicity space and C^{dλ} is the space on which the irreducible representation Rλ acts.
Using Schur’s lemma, notice that A can only act in the multiplicity space in order to
commute with R(g)⊗k , as the latter has a block diagonal structure, and thus A must have the
structure of Eq. (2.18) as a direct consequence of Schur’s lemma. In this case, the dimension
of the commutant C (k) (G) is related to the degrees of freedom of all Aλ matrices that appear
in Eq. (2.18).
Since Aλ is an arbitrary mλ × mλ matrix and can act freely on the multiplicity space, it contributes mλ² independent entries. Thus, the dimension D of the commutant is simply the sum of the contributions from all the Aλ matrices,
D = Σ_λ mλ²,   (2.20)
as we wanted.
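The counting D = Σ_λ mλ² can be illustrated on a small example (a sketch with assumed inputs, not part of the original solution): for the bit-flip group G = {1, X} of Exercise 1 acting on k = 2 qubits, the two one-dimensional irreducible representations of G each appear with multiplicity mλ = 2, so the expected commutant dimension is 2² + 2² = 8. The Python sketch below obtains the same number by solving [A, R(g)^{⊗2}] = 0 directly.

import numpy as np

# Commutant dimension for G = {1, X} acting on two qubits as R(g) tensor R(g).
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
reps = [np.kron(I2, I2), np.kron(X, X)]   # R(g)^{tensor 2} for g = 1, X
d = 4

# vec(AR - RA) = (R^T kron I - I kron R) vec(A); stack the constraints for all g.
blocks = [np.kron(R.T, np.eye(d)) - np.kron(np.eye(d), R) for R in reps]
M = np.vstack(blocks)

dim_commutant = d * d - np.linalg.matrix_rank(M)
print("dim C^(2)(G) =", dim_commutant)    # expected 2^2 + 2^2 = 8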
3. High Dimensional Quantum Communication with Structured Light
Exercise 1
Let HGθ(x, y) be the first-order Hermite-Gauss mode rotated counter-clockwise by θ. Show that:
• HGθ(x, y) = cos θ HG10(x, y) + sin θ HG01(x, y)
• ∫ HG*θ(x, y) HGθ(x, y) dx dy = 1
• HGθ+π/2(x, y) = −sin θ HG10(x, y) + cos θ HG01(x, y)
• LG±(r, ϕ) = (1/√2)[HG10(x, y) ± i HG01(x, y)]
Solution:
Author: Moisés da Rocha Alves
The Hermite-Gauss modes are defined as
HGmn(x, y) = (Amn/w(z)) H_m(√2 x/w(z)) H_n(√2 y/w(z)) exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iϕN(z)},   (3.1)
where H_m are the Hermite polynomials, w(z) is the beam width, R(z) the radius of curvature of the wavefront, and ϕN(z) the Gouy phase. They satisfy
∫ HG*mn(x, y) HGm′n′(x, y) dx dy = δ_{mm′} δ_{nn′},   (3.2)
which makes them an orthonormal set of functions. This relation is guaranteed by the orthogonality of the Hermite polynomials, and the normalization constant Amn ensures that this condition is properly maintained.
Since we are dealing with first-order Hermite-Gauss modes, we can describe them using the definition of Eq. (3.1). Since H1(x) = 2x and H0(x) = 1, the first-order mode in the x-direction is:
HG10(x, y) = (A10/w(z)) · (2√2 x/w(z)) · exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iϕN(z)}.   (3.3)
Following the same argument, for the first-order mode in the y-direction:
HG01(x, y) = (A01/w(z)) · (2√2 y/w(z)) · exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iϕN(z)}.   (3.4)
Now, let us prove the first property stated in the exercise. Following the result of Eq. (3.3), we want to rotate it counter-clockwise. Thus:
HGθ(x, y) = HG10(x′, y′)   with   (x′; y′) = (cos θ  sin θ; −sin θ  cos θ)(x; y),   (3.5)
which gives us:
HGθ(x, y) = (A10/w(z)) · (2√2 (x cos θ + y sin θ)/w(z)) · exp(−[(x cos θ + y sin θ)² + (−x sin θ + y cos θ)²]/w²(z)) exp(ik[(x cos θ + y sin θ)² + (−x sin θ + y cos θ)²]/2R(z)) e^{−iϕN(z)},   (3.6)
which, since (x cos θ + y sin θ)² + (−x sin θ + y cos θ)² = x² + y², can be simplified to:
HGθ(x, y) = (A10/w(z)) · (2√2 (x cos θ + y sin θ)/w(z)) · exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iϕN(z)}.   (3.7)
Now, notice that both HG10 and HG01 have the same normalization constant, since they are both first-order modes, so A10 = A01. Splitting the factor 2√2(x cos θ + y sin θ) by linearity then gives the desired result:
HGθ(x, y) = cos θ [ (A10/w(z)) (2√2 x/w(z)) exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iϕN(z)} ]
          + sin θ [ (A01/w(z)) (2√2 y/w(z)) exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iϕN(z)} ],   (3.8)
where we can recognize HG10 and HG01 from Eq. (3.3) and Eq. (3.4), thus giving:
HGθ (x, y) = cos θ HG10 (x, y) + sin θ HG01 (x, y). (3.9)
Now that we have proven the first identity, we can use it to prove the second and third identities. Let us first prove the second one. If we calculate the integral in the exercise and use the first identity, we get:
∫ HG*θ HGθ dx dy = ∫ [cos θ HG*10 + sin θ HG*01][cos θ HG10 + sin θ HG01] dx dy
                 = cos²θ ∫ HG*10 HG10 dx dy + cos θ sin θ ∫ HG*10 HG01 dx dy
                 + sin θ cos θ ∫ HG*01 HG10 dx dy + sin²θ ∫ HG*01 HG01 dx dy.   (3.10)
Using Eq. (3.2), one can see that the second and third integrals vanish, while the first and last integrals equal one, implying that:
∫ HG*θ(x, y) HGθ(x, y) dx dy = cos²θ + sin²θ = 1,   (3.11)
as we wanted.
For the third identity, we can use the same approach. Notice that:
HGθ+π/2 (x, y) = cos(θ + π/2) HG10 (x, y) + sin(θ + π/2) HG01 (x, y), (3.12)
and since cos(θ + π/2) = − sin θ and sin(θ + π/2) = cos θ, this gives us
HGθ+π/2 (x, y) = − sin θ HG10 (x, y) + cos θ HG01 (x, y). (3.13)
To conclude, we want to prove the last property. For this purpose, we shall introduce the Laguerre-Gaussian modes:
LGpl(r, ϕ) = (Bpl/w(z)) (√2 r/w(z))^{|l|} L_p^{|l|}(2r²/w²(z)) exp(−r²/w²(z)) exp(ikr²/2R(z)) e^{−iψN(z)} e^{ilϕ},   (3.16)
where again Bpl is a normalization constant, L_p^{|l|} is the generalized Laguerre polynomial, and ψN(z) the corresponding Gouy phase. For p = 0 and l = ±1 we have L_0^{1}(·) = 1, so
LG±(r, ϕ) ≡ LG_{0,±1}(r, ϕ) = (B_{0,±1}/w(z)) (√2 r/w(z)) exp(−r²/w²(z)) exp(ikr²/2R(z)) e^{−iψN(z)} e^{±iϕ},   (3.17)
and writing r e^{±iϕ} = x ± iy in Cartesian coordinates,
LG±(r, ϕ) = (B_{0,±1}/w(z)) (√2 (x ± iy)/w(z)) exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iψN(z)}.   (3.18)
Now, to conclude our result, we need to know the relation between the two normalization constants. The constant Bpl can be written as:
Bpl = √( 2p!/(π(p + |l|)!) ),   (3.19)
so B_{0,±1} = √(2/π). For the constant in Eq. (3.1), we have:
Amn = √( 2/(π 2^{m+n} m! n!) ),   (3.20)
which gives us A10 = A01 = √(1/π). Using these two relations gives us:
B_{0,±1} = √2 A10 = √2 A01.   (3.21)
We can now substitute these relations into Eq. (3.18), multiply and divide it by √2, and get:
LG±(r, ϕ) = (1/√2) { [ (A10/w(z)) (2√2 x/w(z)) exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iψN(z)} ]
           ± i [ (A01/w(z)) (2√2 y/w(z)) exp(−(x² + y²)/w²(z)) exp(ik(x² + y²)/2R(z)) e^{−iψN(z)} ] }.   (3.22)
From this expression, we can recognize HG10 and HG01 from Eq. (3.3) and Eq. (3.4) (for these first-order modes the Gouy phases coincide, ψN(z) = ϕN(z)), thus giving:
LG±(r, ϕ) = (1/√2)[HG10(x, y) ± i HG01(x, y)],   (3.23)
which concludes our proof.
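Equation (3.23) can be verified numerically. The sketch below is an illustration with assumed parameters (beam-waist plane z = 0, waist w0 = 1, overall constants dropped) and is not part of the original solution: it checks that (HG10 + i HG01)/√2 has the (x + iy) ∝ r e^{iϕ} structure of the LG_{0,+1} mode.

import numpy as np

# Check (HG10 + i*HG01)/sqrt(2) against (x + i y) * Gaussian at the beam waist (z = 0, w0 = 1).
w0 = 1.0
x, y = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-3, 3, 201))
gauss = np.exp(-(x**2 + y**2) / w0**2)

HG10 = (2 * np.sqrt(2) * x / w0) * gauss      # H_1(sqrt(2) x / w0) = 2 sqrt(2) x / w0
HG01 = (2 * np.sqrt(2) * y / w0) * gauss
LG_plus = (HG10 + 1j * HG01) / np.sqrt(2)

r = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)
expected = (2.0 / w0) * r * np.exp(1j * phi) * gauss   # proportional to (x + i y) * Gaussian
print("max deviation:", np.max(np.abs(LG_plus - expected)))   # ~ machine precision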
Exercise 2
Let unm(x, y) be a basis set of the square-integrable functions in R². Show that:
Σ_{n,m=0}^{∞} unm(x, y) u*nm(x′, y′) = δ(x − x′) δ(y − y′)
Solution:
Author: Moisés da Rocha Alves
Writing the mode functions as unm(x, y) = ⟨x, y|unm⟩, the left-hand side becomes:
Σ_{n,m=0}^{∞} unm(x, y) u*nm(x′, y′) = Σ_{n,m=0}^{∞} ⟨x, y|unm⟩⟨unm|x′, y′⟩ = ⟨x, y| ( Σ_{n,m=0}^{∞} |unm⟩⟨unm| ) |x′, y′⟩.   (3.24)
Notice that the set of functions {|unm⟩} forms a complete orthonormal basis of the Hilbert space of square-integrable functions on R², and thus obeys the completeness relation:
Σ_{n,m=0}^{∞} |unm⟩⟨unm| = 1.   (3.25)
With that in mind, we can substitute this expression into Eq. (3.24) and get the result:
∞
unm (x, y)u∗nm (x′ , y ′ ) = ⟨x, y|x′ , y ′ ⟩, (3.26)
X
n,m=0
and since the position eigenstates {|x, y⟩} are orthonormal, we have ⟨x, y|x′, y′⟩ = δ(x − x′)δ(y − y′), proving the desired result:
Σ_{n,m=0}^{∞} unm(x, y) u*nm(x′, y′) = δ(x − x′) δ(y − y′).   (3.27)
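The completeness relation can also be illustrated numerically. The sketch below is a 1D illustration with Hermite-Gauss functions un(x); the truncation orders and the test function are assumptions, not part of the original solution. It shows that the truncated kernel K_N(x, x′) = Σ_{n<N} un(x) un(x′) reproduces a smooth test function ever more accurately as N grows, as expected if it converges to δ(x − x′).

import numpy as np
from math import factorial
from scipy.special import eval_hermite

# Truncated completeness sum for 1D Hermite-Gauss functions u_n(x).
def u(n, x):
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * eval_hermite(n, x) * np.exp(-x**2 / 2)

x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]
f = np.exp(-(x - 1.0)**2)                 # smooth test function to reproduce

for N in (5, 20, 60):
    K = sum(np.outer(u(n, x), u(n, x)) for n in range(N))   # K_N(x, x')
    f_rec = K @ f * dx                    # integral of K_N(x, x') f(x') dx'
    print(N, np.max(np.abs(f_rec - f)))   # error shrinks as N grows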
Challenge
• Show that the vector structures used for alignment-free quantum communication, Ψθ(x, y) = HGθ(x, y) êθ + HGθ+π/2(x, y) êθ+π/2 and Ψθ(x, y) = HGθ(x, y) êθ+π/2 + HGθ+π/2(x, y) êθ, are rotation invariant.
• Show that the polarization Stokes parameters for these vector structures are all equal to zero, if measured with large-area detectors.
Solution:
Authors: Moisés da Rocha Alves and Paulo Vitor de Queiroz Ferreira
• Let’s begin by demonstrating the first part of the challenge. Firstly, we want to prove
that the first vector structure:
Ψθ (x, y) = HGθ (x, y)êθ + HGθ+π/2 (x, y)êθ+π/2 , (3.28)
is rotation invariant. The way we want to show it is by using Eq. (3.9) and Eq. (3.13), as well as êθ = cos θ êH + sin θ êV and
êθ+π/2 = cos(θ + π/2) êH + sin(θ + π/2) êV = −sin θ êH + cos θ êV,   (3.29)
and substitute directly in the expression for Ψθ (x, y) to show its rotation invariance.
With these relations in mind, we can write:
Ψθ (x, y) =[cos θ HG10 (x, y) + sin θ HG01 (x, y)](cos θêH + sin θêV )
(3.30)
+ [− sin θ HG10 (x, y) + cos θ HG01 (x, y)](− sin θêH + cos θêV ),
which results in:
Ψθ (x, y) =[HG10 (x, y)(cos2 θ + sin2 θ) + HG01 (x, y)(sin θ cos θ − cos θ sin θ)] êH
+ [HG01 (x, y)(cos2 θ + sin2 θ) + HG10 (x, y)(cos θ sin θ − sin θ cos θ)] êV ,
(3.31)
thus giving:
Ψθ (x, y) = HG10 (x, y)êH + HG01 (x, y)êV . (3.32)
As Ψθ (x, y) is no longer dependent on θ, this shows that regardless of the angle we
choose, its value remains unchanged, and therefore, we can say it is rotation invariant.
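This θ-independence can also be checked numerically. The sketch below uses assumed parameters (beam-waist plane, w0 = 1, constant prefactors dropped) and is not part of the original solution: it builds HGθ and HGθ+π/2 from rotated coordinates and verifies that the H and V components of Ψθ equal HG10 and HG01 for several values of θ, as in Eq. (3.32).

import numpy as np

# Verify that Psi_theta = HG_theta e_theta + HG_{theta+pi/2} e_{theta+pi/2}
# has theta-independent components in the fixed (e_H, e_V) basis.
x, y = np.meshgrid(np.linspace(-3, 3, 101), np.linspace(-3, 3, 101))
gauss = np.exp(-(x**2 + y**2))

def HG_rot(theta):
    # first-order HG mode rotated counter-clockwise by theta (x -> x cos + y sin)
    return 2 * np.sqrt(2) * (x * np.cos(theta) + y * np.sin(theta)) * gauss

HG10, HG01 = HG_rot(0.0), HG_rot(np.pi / 2)
for theta in (0.3, 1.1, 2.5):
    a, b = HG_rot(theta), HG_rot(theta + np.pi / 2)
    comp_H = a * np.cos(theta) - b * np.sin(theta)   # e_theta, e_{theta+pi/2} projected on e_H
    comp_V = a * np.sin(theta) + b * np.cos(theta)   # ... and on e_V
    print(theta, np.max(np.abs(comp_H - HG10)), np.max(np.abs(comp_V - HG01)))  # ~0, ~0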
Now we want to do the same for the second expression:
Ψθ (x, y) = HGθ (x, y)êθ+π/2 + HGθ+π/2 (x, y)êθ . (3.33)
Let’s take the same approach:
Ψθ (x, y) =[cos θ HG10 (x, y) + sin θ HG01 (x, y)](− sin θêH + cos θêV )
(3.34)
+ [− sin θ HG10 (x, y) + cos θ HG01 (x, y)](cos θêH + sin θêV ),
which results in:
Ψθ (x, y) = êH [HG10 (x, y)(−2 sin θ cos θ) + HG01 (x, y)(cos2 θ − sin2 θ)]
(3.35)
+ êV [HG10 (x, y)(cos2 θ − sin2 θ) + HG01 (x, y)(2 sin θ cos θ)].
Now, we can use the fact that sin(2θ) = 2 sin(θ) cos(θ) and cos(2θ) = cos2 θ − sin2 θ.
Thus:
Ψθ (x, y) =[− sin(2θ)HG10 (x, y) + cos(2θ)HG01 (x, y)]êH
(3.36)
+ [cos(2θ)HG10 (x, y) + sin(2θ)HG01 (x, y)]êV .
Notice that we can regroup this expression in terms of HG10 (x, y) and HG01 (x, y):
Ψθ (x, y) =HG10 (x, y)[− sin(2θ)êH + cos(2θ)êV ]
(3.37)
+ HG01 (x, y)[cos(2θ)êH + sin(2θ)êV ],
where we can recognize ê2θ+π/2 = − sin(2θ)êH + cos(2θ)êV and ê2θ = cos(2θ)êH +
sin(2θ)êV and write:
Ψθ (x, y) = HG10 (x, y)ê2θ+π/2 + HG01 (x, y)ê2θ . (3.38)
Since ê2θ+π/2 and ê2θ always form an orthogonal basis regardless of the value of θ (i.e., ê2θ+π/2 · ê2θ = 0), the components of the vector Ψθ(x, y) in this basis do not change with θ. Thus, this vector structure is also rotation invariant.
• Now we move to the second part of the challenge. Let us first state the polarization Stokes parameters S1, S2 and S3:
S1 = ∫ ( |EH(x, y)|² − |EV(x, y)|² ) dx dy,   (3.42)
S2 = 2 ∫ Re(EH(x, y) E*V(x, y)) dx dy,   (3.43)
S3 = 2 ∫ Im(EH(x, y) E*V(x, y)) dx dy.   (3.44)
Now, let’s use the first vector structure in these expressions. Using the result of
Eq. (3.32) one can recognize that EH (x, y) = HG10 (x, y) and EV (x, y) = HG01 (x, y)
in this case. Therefore:
S1 = ∫ ( |HG10(x, y)|² − |HG01(x, y)|² ) dx dy,   (3.45)
S2 = 2 ∫ Re(HG10(x, y) HG*01(x, y)) dx dy,   (3.46)
S3 = 2 ∫ Im(HG10(x, y) HG*01(x, y)) dx dy.   (3.47)
To prove these results are zero, we can just look at Eq. (3.2). For S1 we will have
S1 = ∫ ( HG10(x, y) HG*10(x, y) − HG01(x, y) HG*01(x, y) ) dx dy = 1 − 1 = 0.
For S2 and S3 , since HG10 and HG01 are orthogonal, as stated in Eq. (3.2) (the Hermite
polynomials are orthogonal), we will also have S2 = 0 and S3 = 0. Therefore, for the
first vector structure, all polarization Stokes parameters will be equal to zero.
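The vanishing of the Stokes parameters can be confirmed numerically as well. The sketch below uses unnormalized modes at the beam waist (assumed parameters, not part of the original solution) and integrates S1, S2 and S3 for the first vector structure over a large detector area.

import numpy as np

# Stokes parameters of Psi_theta = HG10 e_H + HG01 e_V integrated over the full plane.
x, y = np.meshgrid(np.linspace(-5, 5, 401), np.linspace(-5, 5, 401))
dA = (x[0, 1] - x[0, 0])**2
gauss = np.exp(-(x**2 + y**2))
E_H = 2 * np.sqrt(2) * x * gauss          # HG10 (waist plane, prefactors dropped)
E_V = 2 * np.sqrt(2) * y * gauss          # HG01

S1 = np.sum(np.abs(E_H)**2 - np.abs(E_V)**2) * dA
S2 = 2 * np.sum(np.real(E_H * np.conj(E_V))) * dA
S3 = 2 * np.sum(np.imag(E_H * np.conj(E_V))) * dA
print(S1, S2, S3)                         # all ~ 0 up to numerical error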
Now what is left is proving the same for the second vector structure. For this purpose,
we are going to use the expression in Eq. (3.36). From this expression, we can recognize
that
EH (x, y) = − sin(2θ)HG10 (x, y) + cos(2θ)HG01 (x, y)
and
EV (x, y) = cos(2θ)HG10 (x, y) + sin(2θ)HG01 (x, y).
Thus, the polarization Stokes parameters will be
S1 = ∫ { |−sin(2θ) HG10(x, y) + cos(2θ) HG01(x, y)|² − |cos(2θ) HG10(x, y) + sin(2θ) HG01(x, y)|² } dx dy,   (3.48)
S2 = 2 ∫ Re{ [−sin(2θ) HG10(x, y) + cos(2θ) HG01(x, y)] [cos(2θ) HG10(x, y) + sin(2θ) HG01(x, y)]* } dx dy,   (3.49)
S3 = 2 ∫ Im{ [−sin(2θ) HG10(x, y) + cos(2θ) HG01(x, y)] [cos(2θ) HG10(x, y) + sin(2θ) HG01(x, y)]* } dx dy.   (3.50)
Expanding S1 and using Eq. (3.2), the cross terms containing HG10 HG*01 and HG01 HG*10 integrate to zero, while the remaining terms give
S1 = [sin²(2θ) − cos²(2θ)] ∫ |HG10|² dx dy + [cos²(2θ) − sin²(2θ)] ∫ |HG01|² dx dy = 0.
For S2 and S3 , we can use the same argument (contributions from terms with HG10 HG∗01
and HG01 HG∗10 will be zero) and get
S2 = 2 ∫ Re[ −sin(2θ)cos(2θ) HG10 HG*10 + sin(2θ)cos(2θ) HG01 HG*01 ] dx dy,   (3.54)
S3 = 2 ∫ Im[ −sin(2θ)cos(2θ) HG10 HG*10 + sin(2θ)cos(2θ) HG01 HG*01 ] dx dy,   (3.55)
and we can see that these contributions should cancel out, since the integrals with
HG10 HG∗10 and HG01 HG∗01 will give the same value. Thus, finally, S2 = S3 = 0,
concluding our proof.