
Computing the DFT

- Reviewing the basic DFT formula:

      X[k] = \sum_{n=0}^{N-1} x[n] W_N^{kn}

      x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k] W_N^{-kn}

  where W_N = e^{-j 2\pi / N}.

- Direct computation requires about 4N real multiplications and 4N real additions for each k (a complex multiplication needs 4 real multiplications and 2 real additions).
- For all N coefficients, that gives about 8N^2 real operations.
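- As a concrete reference, here is a minimal Python sketch of the direct computation (the name direct_dft is ours, not from the notes); the two nested loops are what make the cost grow as N^2:

```python
import cmath

def direct_dft(x):
    """Direct DFT: X[k] = sum_n x[n] * W_N^(kn), with W_N = exp(-j*2*pi/N)."""
    N = len(x)
    X = []
    for k in range(N):            # N output coefficients...
        acc = 0j
        for n in range(N):        # ...each needing about N complex multiply-adds
            acc += x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
        X.append(acc)
    return X
```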

- Can we speed this up? Exploit the symmetry and periodicity of W_N^{kn}:

  - Complex-conjugate symmetry:  W_N^{k(N-n)} = W_N^{-kn} = (W_N^{kn})^*
  - Periodicity in n and k:      W_N^{kn} = W_N^{k(n+N)} = W_N^{(k+N)n}

- We can use these symmetry properties to group terms in the summation to improve computational efficiency (how long it takes as a function of how big N is).

The FFT
- Algorithms for computing the DFT which are more computationally efficient than the direct method (better than proportional to N^2) are called Fast Fourier Transforms (FFTs).
- Generally, we use "FFT" to refer to algorithms which work by breaking the DFT of a long sequence into smaller and smaller chunks.

Goertzel algorithm (not an FFT)


      X[k] = \sum_{n=0}^{N-1} x[n] W_N^{kn}
           = \sum_{n=0}^{N-1} x[n] W_N^{-k(N-n)}                    (since W_N^{-kN} = 1)
           = \sum_{n=-\infty}^{\infty} x[n] W_N^{-k(N-n)} u[N-n]
           = y_k[N],   with  h_k[n] = W_N^{-kn} u[n]  and  y_k[n] = x[n] * h_k[n]

- This shows that we can compute X[k] using an LTI system, as opposed to directly. Note that it's a different system (with impulse response h_k[n]) for every k.

- If we use this, we get

      H_k(z) = \frac{1}{1 - W_N^{-k} z^{-1}}

- We then make it (apparently) more complicated by multiplying numerator and denominator by (1 - W_N^{k} z^{-1}):

      H_k(z) = \frac{1 - W_N^{k} z^{-1}}{(1 - W_N^{-k} z^{-1})(1 - W_N^{k} z^{-1})}
             = \frac{1 - W_N^{k} z^{-1}}{1 - 2\cos(2\pi k / N) z^{-1} + z^{-2}}

- This gets rid of the complex coefficients in the feedback (pole) terms, so the recursion runs with real multiplications only, which actually makes it faster than the original method.

Flow graph for the Goertzel algorithm (figure): the output of the flow graph at n = N is X[k].
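- A minimal sketch of the recursion this flow graph describes, based on the second-order H_k(z) above (the name goertzel_bin is illustrative): the feedback path uses only the real coefficient 2 cos(2πk/N), and the single complex multiply by W_N^k is applied once at the end.

```python
import math
import cmath

def goertzel_bin(x, k):
    """Compute the single DFT coefficient X[k] via the Goertzel recursion."""
    N = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / N)   # real feedback coefficient
    s1 = s2 = 0.0                                   # state: s[n-1], s[n-2]
    for n in range(N):                              # pole-only recursion (real arithmetic if x is real)
        s = x[n] + coeff * s1 - s2
        s2, s1 = s1, s
    s = coeff * s1 - s2                             # one extra step with zero input gives s[N]
    return s - cmath.exp(-2j * math.pi * k / N) * s1   # apply the zero (1 - W_N^k z^-1)
```

- For a single bin, the loop costs about N real multiplies, which is where the factor-of-two savings (and the big win when only a few X[k]'s are needed) comes from.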

- The Goertzel algorithm helps by about a factor of 2. (Unless we need only certain X[k]'s; then it can help a lot.)
- We can reduce the time complexity from O(N^2) (compute time proportional to N^2) to O(N log N) (proportional to N times log N) if we use FFTs. There are two main methods:
  - Decimation-in-time
  - Decimation-in-frequency

Decimation in time
- Start with decimation by 2 (Fig. 9.3):

      X[k] = DFT_N\{x[n]\} = \sum_{n=0}^{N-1} x[n] W_N^{kn}

           = \sum_{n even} x[n] W_N^{kn} + \sum_{n odd} x[n] W_N^{kn}

           = \sum_{n=0}^{(N/2)-1} x[2n] W_N^{k(2n)} + \sum_{n=0}^{(N/2)-1} x[2n+1] W_N^{k(2n+1)}

           = \sum_{n=0}^{(N/2)-1} x[2n] W_{N/2}^{kn} + W_N^{k} \sum_{n=0}^{(N/2)-1} x[2n+1] W_{N/2}^{kn}

           = DFT_{N/2}\{x[n]_{even}\} + W_N^{k}\, DFT_{N/2}\{x[n]_{odd}\}
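- A small sketch of this one-level split (the name dit_split is ours; numpy's fft is used only as a stand-in for the two half-size DFTs), checked against the full-size transform:

```python
import numpy as np

def dit_split(x):
    """One decimation-in-time step: two N/2-point DFTs plus N twiddle multiplies."""
    x = np.asarray(x, dtype=complex)
    N = len(x)                           # assumes N is even
    E = np.fft.fft(x[0::2])              # DFT_{N/2} of the even-indexed samples
    O = np.fft.fft(x[1::2])              # DFT_{N/2} of the odd-indexed samples
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)      # twiddle factors W_N^k
    # the half-size DFTs repeat with period N/2, hence the modulo indexing
    return E[k % (N // 2)] + W * O[k % (N // 2)]

x = np.random.randn(8)
assert np.allclose(dit_split(x), np.fft.fft(x))   # matches the full-size DFT
```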

Resulting Improvement
- Original direct DFT:
  2N^2 steps (N^2 multiplications, N^2 additions)

- New way:
  N/2-point DFT + N/2-point DFT + N mult. + N add.
  = 2(N/2)^2 + 2(N/2)^2 + N + N = N^2 + 2N steps
  (N^2/2 + N multiplications, and the same number of additions)

- This is faster if N^2 > 2N
  - True as long as N > 2

- Basic underlying idea: use 2 half-size DFTs instead of 1 full-size DFT.
- Continuing the idea: we could split the half-size DFTs in half again, and keep splitting the pieces in half until the number of points left in each block is down to 2. (Then we just do those directly; a recursive sketch follows below.)
- Side note: we could just as easily split into 3 pieces instead of 2; the math works out to break a DFT into any set of equal-size pieces.
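- Carrying the splitting all the way down gives the familiar recursive radix-2 form; a minimal sketch, assuming N is a power of 2 (fft_recursive is an illustrative name; the recursion here bottoms out at single samples, which is equivalent to stopping at 2-point blocks):

```python
import cmath

def fft_recursive(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)                   # a 1-point DFT is just the sample itself
    E = fft_recursive(x[0::2])           # DFT of the even-indexed samples
    O = fft_recursive(x[1::2])           # DFT of the odd-indexed samples
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * O[k]   # W_N^k * O[k]
        X[k] = E[k] + t
        X[k + N // 2] = E[k] - t         # uses W_N^{k+N/2} = -W_N^k
    return X
```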

8-pt DIT FFT

A 2-pt DFT:

      DFT_2\{x[n]\} = \sum_{n=0}^{1} x[n] W_2^{kn}
                    = x[0] + x[1],   k = 0
                    = x[0] - x[1],   k = 1

(Butterfly figure: inputs x[0], x[1]; outputs X[0], X[1].)

Resulting 8-pt FFT:

- As long as N can be broken down into a bunch of small factors, we can use this method.
  - Usually we use powers of 2 (N = 2^x), so that 2 is the biggest prime factor. This gives the most improvement.
- The time complexity becomes proportional to how many times we have to subdivide N before we get to the smallest block, i.e., the number of FFT stages.
  - For powers of 2, this is log2(N).
  - Overall time becomes O(N log2 N) instead of O(N^2).

Significance
- The change from N^2 to N log2 N is big:

      N         N^2              N log2 N
      64        4,096            384
      256       65,536           2,048
      1024      1,048,576        10,240
      4096      16,777,216       49,152
      16384     268,435,456      229,376
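- These numbers are just N^2 versus N*log2(N); a short sketch reproduces the table:

```python
import math

for N in (64, 256, 1024, 4096, 16384):
    print(f"{N:6d}  {N*N:12,d}  {int(N * math.log2(N)):9,d}")
```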

- But we can still get a little more improvement through symmetry.

The Butterfly
- If we use N = 2^x, then all the FFT operations are 2-input, 2-output blocks, similar to 2-point DFTs. This building block is called a butterfly.

- Since W_N^{r+N/2} = -W_N^{r}, we can re-draw the butterfly operation, reducing each block to a single complex multiplication (a 2x improvement).

Resulting 8-pt FFT

In-place computation
- Each stage of the FFT process has N inputs and N outputs, so we need exactly N storage locations at any one point in the calculation.
- It is possible to re-use the same storage locations at each stage to reduce memory overhead (see the sketch below).
  - Any algorithm which uses the same memory to store successive iterations of a calculation is called an in-place algorithm.
  - Computation must be done in a specific order.
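- A minimal sketch of an in-place, iterative radix-2 DIT FFT (illustrative code, not taken from the notes): the array is first permuted into bit-reversed order, and then each stage overwrites the same N locations using single-multiply butterflies.

```python
import cmath

def fft_in_place(x):
    """Iterative radix-2 DIT FFT computed in place; len(x) must be a power of 2."""
    N = len(x)
    # Bit-reversal permutation so the in-place butterflies pair up the right elements.
    j = 0
    for i in range(1, N):
        bit = N >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # log2(N) stages; each stage does N/2 butterflies, one complex multiply each.
    m = 2
    while m <= N:
        Wm = cmath.exp(-2j * cmath.pi / m)        # W_m = W_N^{N/m}
        for start in range(0, N, m):
            W = 1 + 0j
            for r in range(m // 2):
                a = x[start + r]
                b = W * x[start + r + m // 2]     # the single twiddle multiply
                x[start + r] = a + b              # butterfly outputs overwrite
                x[start + r + m // 2] = a - b     # the two inputs (in place)
                W *= Wm
        m *= 2
    return x
```

- The bit-reversed input ordering is the "specific order" requirement: with it, every butterfly reads and writes the same two array locations, so no second array is needed.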

Decimation in Frequency
- Instead of separating odd and even x[n], we separate odd and even X[k]. For k even:

      X[2r] = \sum_{n=0}^{N-1} x[n] W_N^{(2r)n}

            = \sum_{n=0}^{(N/2)-1} x[n] W_N^{(2r)n} + \sum_{n=N/2}^{N-1} x[n] W_N^{(2r)n}

            = \sum_{n=0}^{(N/2)-1} x[n] W_N^{(2r)n} + \sum_{n=0}^{(N/2)-1} x[n+N/2] W_N^{(2r)(n+N/2)}

            = \sum_{n=0}^{(N/2)-1} (x[n] + x[n+N/2]) W_{N/2}^{rn}

            = DFT_{N/2}\{x[n] + x[n+N/2]\}

  (using W_N^{(2r)(N/2)} = W_N^{rN} = 1 and W_N^{2rn} = W_{N/2}^{rn})


- For k odd, we get a similar result (with an additional multiplication):

      X[2r+1] = DFT_{N/2}\{(x[n] - x[n+N/2]) W_N^{n}\}

- Again, we can compute the DFT using 2 half-size DFTs, but this time we combine x[n] terms first, then do the DFT (see the sketch below).
- We still:
  - Have a sequence of butterfly elements
  - Can use symmetry to reduce computation
  - Can do in-place computation


Butterfly element:

Resulting 8-pt FFT
