

Understanding the FFT

A Tutorial on the Algorithm & Software for Laymen, Students, Technicians & Working Engineers

Anders E. Zonst

Citrus Press, Titusville, Florida


Publisher's Cataloging-in-Publication Data

Zonst, Anders E.

Understanding the FFT - A Tutorial on the Algorithm & Software for Laymen, Students, Technicians and Working Engineers / A. E. Zonst.

p. cm.

Includes bibliographical references and index.
1. Fourier analysis  2. Fourier transformations
3. Fourier transformations - Data processing  I. Title

QA403.Z658  1995          515.7'23          95-67942
ISBN 0-9645681-8-7

Library of Congress Catalog Card Number: 95-67942

All rights reserved. No part of this book may be reproduced, transmitted or stored by any means, electronic or mechanical, including photocopying, photography, electronic or digital scanning and storing, in any form, without written consent from the publisher. Direct all inquiries to Citrus Press, Box 10062, Titusville, FL 32780.

Copyright ©1995 by Citrus Press

International Standard Book Number 0-9645681-8-7

Printed in the United States of America 10 9 8 7 6 5 4 3 2 1

To Shirley


ACKNOWLEDGEMENTS

A special thanks to Renee, Heather and Maureen. Without their help you would have had to suffer countless instances of bad grammar, awkward syntax and incorrect spelling. Of much more importance, from my point of view, they have nice personalities and are fun to be around.


CONTENTS

Prologue                                          vii
Introduction                                       ix

Part I - The DFT
1   Starting at the Bottom                          1
2   Fourier Series and the DFT                     27
3   The DFT Algorithm                              53
4   The Inverse Transform                          71

Part II - The FFT
5   Four Fundamental Theorems                      81
6   Speeding-Up the DFT                           103
7   The FFT                                       115
8   Anatomy of the FFT Program                    135

Appendix 1    BASIC Programming Language          155
Appendix 5.1  DFT5.1 (Core Program Listing)       159
Appendix 5.2  DFT5.2 (Theorem Routines Listing)   162
Appendix 5.3  Proof of the Theorems               166
Appendix 6.1  DFT6.01 (Program Listing)           171
Appendix 6.2  Modification to DFT6.01             173
Appendix 6.3  Modification to DFT6.01             173
Appendix 7.1  Vector Rotation                     174
Bibliography                                      177
Index                                             179

PROLOGUE

"Considering how many fools can calculate, it is surprising that it should be thought either a difficult or a tedious task for any other fool to learn how to master the same tricks.

Some calculus-tricks are quite easy. Some are enormously difficult. The fools who write the text books of advanced mathematics-and they are mostly clever fools-seldom take the trouble to show you how easy the easy calculations are. On the contrary, they seem to desire to impress you with their tremendous cleverness by going about it in the most difficult way.

Being myself a remarkably stupid fellow, I have had to unteach myself the difficulties, and now beg to present to my fellow fools the parts that are not hard. Master these thoroughly, and the rest will follow. What one fool can do, another can." (Prologue to Calculus Made Easy by Silvanus P. Thompson, F.R.S., 1910)

Even though a great many years had passed since I first obtained a copy of Thompson's magical little book (twenty-eighth printing of the second edition, Macmillan Publishing Company, 1959), I nonetheless recognized this prologue when a part of it appeared recently on the front page of a book. The reader should understand that Professor Thompson wasn't simply being sarcastic. His intention was, beyond question, to throw a lifeline to floundering students. His goal was to provide an introduction to that powerful tool known as The Calculus; to provide a bridge for those who had been victimized by their teachers and texts. Lest anyone mistake his true feelings, he adds the following in the epilogue: "... One other thing will the professed mathematicians say about this thoroughly bad and vicious book: that the reason why it is so easy is because the author has left out all the things that are really difficult. And the ghastly fact about this accusation is that - it is true! That is, indeed, why the book has been written - written for the legion of innocents who have hitherto been deterred from acquiring the elements of the calculus by the stupid way in which its teaching is almost always presented. Any subject can be made repulsive by presenting it bristling with difficulties. The aim of this book is to enable beginners to learn its language, to acquire familiarity with its endearing simplicities, and to grasp its powerful methods of problem solving, without being compelled to toil through the intricate out-of-the-way (and mostly irrelevant) mathematical gymnastics so dear to the unpractical mathematician ..." (From the Epilogue and Apology of Calculus Made Easy by Silvanus P. Thompson, 1910. Apparently some things never change.)

I cannot be sure that the coincidence of Thompson's prologue, printed boldly on the front page of an exemplary treatise on Fourier Analysis, was the sole motivation for this book - I had already considered just such an essay. Still, if Thompson's ghost had appeared and spoken to me directly, my task would not have been clearer. Allow me to explain: This book is intended to help those who would like to understand the Fast Fourier Transform (FFT), but find the available literature too burdensome. It is born of my own frustration with the papers and texts available on the FFT, and the perplexing way in which this subject is usually presented. Only after an unnecessarily long struggle did I find that the FFT was actually simple - incredibly simple. You do not need to understand advanced calculus to understand the FFT - you certainly do not need deliberately obscure notation and symbols that might be more appropriate to the study of archeology. The simple truth is that the FFT could easily be understood by any high school student with a grasp of trigonometry. Understand, then, that I hold heartfelt sympathy with Thompson's iconoclasm. In fact, if you swap "FFT" for "Calculus," Thompson's strong words express my own feelings better than I am capable of expressing them myself.

But there is another, perhaps better, reason for this book.

Today, systems using the FFT abound-real systems-solving real problems. The programmers, engineers and technicians who develop, use, and maintain these systems need to understand the FFT. Many of these people have long since been "excommunicated" from the specialized groups who discuss and write about this subject. It may be acceptable for professional scholars to communicate via abstruse hieroglyphics, but working engineers and technicians need a more direct route to their tools. This book aims to provide a direct route to the FFT.

INTRODUCTION

This book is written in two parts - an introduction to (or review of) the DFT, and an exposition of the FFT. It is a little book that can be read in a few evenings at most. Recognizing this, I recommend that you start from the beginning and read it all; each chapter builds on all that has preceded. If you are already familiar with the DFT, the first four chapters should read comfortably in a single evening.

I have gone as far as I can to make this subject accessible to the widest possible audience, including Appendix 1.1, which provides a "refresher" on the BASIC language. After that, the programs in Part I start out very simply, with detailed explanations of each line of code in the text.

My reason for including these features is that, some years ago (before the advent of the personal computer), there was a period of several years in my life when I was "computer-less." When I once again obtained access to a computer I was shocked to find that I had forgotten the commands and rules for programming (even in BASIC). To my great relief a few hours at a keyboard (with a BASIC programming manual in hand) brought back enough to get me up and running. Appendix 1.1, and the programs of part 1, are designed to accomplish the same thing with much less pain.

In addition to these comments, I should point out that the programs presented in this book are intended to be typed into a computer and run-they actually work. If you don't like to type, a disk with all the program listings can be furnished for $5.00 (which includes postage and handling).

Very well then, the first topic we will consider is: "What, actually, is the Digital Fourier Transform?"


CHAPTER 1

STARTING AT THE BOTTOM

It has been said that a good definition first throws the thing to be defined into a very large pool (i.e. a very broad category) and then pulls it out again (i.e. describes the unique characteristics that differentiate it from the other members of that category). That is the approach we will use in tackling the question: "What, exactly, is the Fourier series?"

1.01 APPROXIMATION BY SERIES

When we first encounter mathematical functions they are defined in simple, direct terms. The common trigonometric functions, for example, are defined with respect to a right triangle with base X, height Y, and hypotenuse H:

Sin(θ) = Y/H -------------------------------- (1.1)

    θ = angle
    Y = height
    H = hypotenuse

Cos(θ) = X/H -------------------------------- (1.2)

    X = base

Tan(θ) = Y/X -------------------------------- (1.3)

Shortly thereafter we learn that these functions may also be expressed as a series of terms:

Sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ... ----- (1.4)

    x = angle in radians
    3!, 5!, etc. = 3 factorial, 5 factorial, etc.

Cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ... ----- (1.5)

These power series are known as Maclaurin/Taylor series and may be derived for all the commonly used trigonometric and transcendental functions.
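As a quick numerical check of eqn. (1.4), the partial sums can be evaluated directly. This is a sketch of ours in Python (the book's own programs, beginning with DFT 1.0 below, are written in BASIC; the function name here is our invention):

```python
import math

def sin_series(x, n_terms=10):
    """Partial sum of the Maclaurin series for Sin(x), eqn. (1.4)."""
    total = 0.0
    for k in range(n_terms):
        # kth term: (-1)^k * x^(2k+1) / (2k+1)!
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total
```

With only ten terms the partial sum agrees with the library sine to many decimal places for moderate arguments, which is exactly how calculators put such series to work.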

1.02 THE FOURIER SERIES

The Fourier series is a trigonometric series. Specifically, it is a series of sinusoids (plus a constant term), whose amplitudes may be determined by a certain process (to be described in the following chapters). Equation (1.6) states the Fourier series explicitly; unfortunately, this compact notation cannot reveal the incredible mathematical subtlety contained within.

F(x) = A0 + A1Cos(x) + A2Cos(2x) + A3Cos(3x) + ...
          + B1Sin(x) + B2Sin(2x) + B3Sin(3x) + ... ----- (1.6)

The Fourier series, like the Taylor/Maclaurin series shown earlier, approximates functions, but it has a different derivation and a different purpose. Rather than being a means of evaluating sines, cosines, etc., at a single point, it serves as a "transformation" for the whole of a given, arbitrary, function.

This, then, is the general pool that we have thrown our Fourier Transform into, but we are at risk here of making the pool so obscure it will require more definition than our definition itself. The newcomer may well ask: "What is this transformation you speak of?" Apparently we are going to transform the original function into another, different, function - but what is the new function and why do we bother? Does the transformed function have some special mathematical properties? Can we still obtain the same information provided by the original function? The answer is yes to both of these questions, but we will come to all of that later; for now we may say that the transform we are referring to, in its digital form, provides a mathematical tool of such power and scope that it can hardly be exceeded by any other development of applied mathematics in the twentieth century.

Now we move to the second part of our definition-we must pull the defined topic out of the pond again. This part of the definition requires that we speak carefully and use our terms precisely, for now we hope to reveal the specific nature of the Fourier Transform. We will begin with a couple of definitions that will be used throughout the remainder of this book.

1.03 FUNCTIONS

The term function (or single valued function) is, in a sense, a "loose" term (i.e. it describes a very simple notion that can be fulfilled many different ways). It only implies a set of ordered pairs of numbers (x,y), where the second number (y, the dependent variable) is a unique value which corresponds to the first number (x, the independent variable). Now, obviously, the equations encountered in engineering and physics can provide sets of numbers which fulfill this definition, but so will any simple list of numbers. It is not necessary to know the equation that relates the dependent to the independent variable, nor even that the two numbers be related by an equation at all! This "looseness" is essential if the term "function" is to cover the practical work that is done with a Digital Fourier Transform (DFT), for it is seldom that the information obtained by digitizing a signal can be described by an equation.

1.03.1 Discontinuous Functions

There are functions, encountered frequently in technical work (and especially in Fourier Analysis), that are difficult to describe in a few words. For example, the "Unit Square Wave" must be defined in some manner such as the following:

f(x) = 1 [for 0 < x < x1] ------------------ (1.7)

and:

f(x) = -1 [for x1 < x < 2x1] ----------------- (1.8)

We make no statement about this function outside the interval of 0 < x < 2x1. This interval is referred to as the "domain of definition" or simply the "domain" of the function. We will have more to say about the domain of a function and its transform shortly, but for now let's continue to investigate discontinuous functions. We require two separate equations to describe this Square Wave function, but we also need some explanation: At the point x = x1 the first equation ends and the second equation begins - there is no "connection" between them. The function is discontinuous at the point where it jumps from +1 to -1. It is sometimes suggested that these two equations be connected by a straight, vertical line of "infinite slope", but this "connection" cannot be allowed. A connecting line of infinite slope would have more than one value (in fact, an infinite number of values) at the "point" of transition. The definition of a single valued function requires a single "unique" value of the dependent variable for any value of the independent variable.

[Figure: f(x) = Unit Square Wave, equal to +1 from x = 0 to x1 and to -1 from x1 to 2x1]

Mathematically, functions such as the unit square wave must remain discontinuous; but, physically, such discontinuities are not realizable. All voltage square waves measured in a laboratory, for example, will have finite rise and fall times.
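The two-equation definition above translates directly into code. Here is a Python sketch of ours (the function name and the choice x1 = 1.0 are illustrative assumptions, not the book's):

```python
def unit_square_wave(x, x1=1.0):
    """Unit Square Wave per eqns. (1.7) and (1.8); defined only on 0 < x < 2*x1."""
    if 0 < x < x1:
        return 1
    if x1 < x < 2 * x1:
        return -1
    # x = 0, x = x1, x = 2*x1, and points outside are not in the domain of definition
    raise ValueError("x is outside the domain of definition")
```

Note that the transition point x = x1 is deliberately excluded: a single valued function cannot take both +1 and -1 there.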

1.04 THE FOURIER SERIES

Our original question was: "What, exactly, is a Fourier series?" We stated back on page 3 that it is a series of sinusoids, and as an aid to the intuition, it is frequently shown that a square wave may be approximated with a series of sine waves. Now, these sinusoids are, in general, referred to as the "harmonics" of the wave shape, except that a sine wave which just fits into the domain of definition (i.e. one cycle fits exactly into the waveform domain) is called the fundamental (see fig. 1.1A). If two cycles fit into this interval they are called the second harmonic (note that there is no first harmonic - that would correspond to the fundamental). If three cycles of sine wave fit into the interval they are called the third harmonic, etc. A square wave, as described in the preceding section, consists of, in addition to the fundamental, only the "odd" numbered harmonics (i.e. 3rd, 5th, 7th, etc.), all of whose amplitudes are inversely proportional to their harmonic number. Caveat: To represent a square wave perfectly by Fourier series, an infinite number of harmonic components would be required. That is to say, the Fourier series can never perfectly reproduce such functions; however, it can reproduce them to any desired degree of accuracy, just as the Taylor series shown in eqns. (1.4) and (1.5) will converge to any desired accuracy (another caveat: convergence of the Fourier series is not a simple subject - but that discussion diverges from our immediate purpose).

In figure 1.1 we show how a summation of odd harmonics begins to form a square wave. Even though we only sum in the first four components, it is apparent that a square wave is beginning to form.

[Figure 1.1 - Construction of a Square Wave. Panels: A. Fundamental & 3rd Harmonic; B. 1st+3rd & 5th; C. 1st+3rd+5th & 7th; and the sum 1st+3rd+5th+7th.]

In figure 1.2 below, we show the results of summing in 11 components and 101 components. We note that with 101 components the approximation is very good although not perfect. The basic idea illustrated here, however, of approximating the waveform of a function by summing harmonically related sinusoids, is the fundamental idea underlying the Fourier series. The implication is that any function may be approximated by summing harmonic components (is this true?)

[Figure 1.2 - Summation of 11 components; 101 components.]

1.05 DISCRETE DATA

Let's take the time now to point out a distinct characteristic of the above "curves." In this book the graphs will usually show what actually happens in digital systems - they will display mathematical "points" plotted at regular intervals. While the Unit Square Wave of section 1.03.1 is discontinuous at the transition point, the functions of figs. 1.1 and 1.2 are discontinuous at every point. This is not a trivial phenomenon - a series of discrete data points is not the same thing as a continuous curve. We suppose the "continuous curves," from which we extract discrete data points, are still somehow represented; but this supposition may not be justified. This characteristic of sampled data systems, and of digital systems in general, creates idiosyncrasies in the DFT that are not present in the continuous Fourier series (e.g. the subtleties of convergence, which were hinted at above, are obviated by the finite series of the DFT). Put simply, our situation is this: If we treat discrete functions carefully, we may think of them as representing underlying linear functions. If, on the other hand, we are careless, the relationship of the discrete function to what we suppose to be the underlying linear function may be completely unfounded (perhaps we can discuss such things in another book).


1.06.1 COMPUTER PROGRAMS

It is anticipated that most readers will have some familiarity with computer programming; but, if not, don't be intimidated. We will start with very simple examples and explain everything we are doing. Generic BASIC is pretty simple, and the examples will gradually increase in difficulty so that you should have no trouble following what we are doing. Understand, however, that these programs are not just exercises - they are the book in the same way that this text is the book. This is what you are trying to learn. Type these programs into a computer and run them - experiment with them (but be careful, this stuff can be addictive). If you have no familiarity with BASIC at all, or if you have not used BASIC in years, you might want to read Appendix 1.1 at this time.

1.06.2 PROGRAM DESCRIPTION

The "square wave" illustration of Figures 1.1 and 1.2 is our first programming example. DFT 1.0 (next page) is essentially the routine used to generate those figures, and its operation is completely illustrated by those figures.

BASIC ignores remarks following REM statements (see line 10). Line 12 asks how many terms we want to sum together and assigns this value to N. Line 20 defines the value of PI. In line 30 we set up a loop that steps the independent variable through


10 REM *** DFT 1.0 - GENERATE SQUARE WAVE ***
12 INPUT "NUMBER OF TERMS";N
20 PI = 3.14159265358#
30 FOR I = 0 TO 2*PI STEP PI/8
32 Y=0
40 FOR J=1 TO N STEP 2: Y=Y+SIN(J*I)/J: NEXT J
50 PRINT Y
60 NEXT I
70 END

Fig. 1.3 - DFT 1.0

2*PI radians (i.e. a full cycle of the fundamental) in increments of PI/8 (if you do not understand the loop structure set up between lines 30 and 60, read Appendix 1.1 now). The "loop counter" for this loop is the variable I, which we also use as the independent variable for the equation in line 40. The loop counter I steps in increments of PI/8, yielding 16 data points. Line 32 clears the variable Y, which will be used to "accumulate" (i.e. sum together) the values calculated in the "one line loop" at line 40. Mathematically, line 40 solves the following equation:

Y = SIN(I*J)/J (summed for all odd J) ------------ (1.9)

    J = harmonic number
    I = argument of the fundamental

Note that division by the harmonic number (J) yields values inversely proportional to the harmonic number. Line 40 is the heart of the program. It is a loop which counts the variable J "up"


from 1 to the number of harmonic terms we requested at line 12 (i.e. "N"). It should be apparent that we are computing the contribution of each harmonic to the waveshape at a given point on the x axis (refer to fig. 1.1 if this is not clear). Each time we pass through the loop, J is incremented by two, so that it takes on only odd harmonic values. Each time through the loop we will sum the following into the variable Y:

1) The value it already has (which is zero the first time through the loop), plus ...

2) SIN(I*J)/J.

Since I is the value of the argument (in radians) of the fundamental, it will be apparent that I*J represents the "distance" we have progressed through the Jth harmonic component.

When all of the harmonic terms have been summed in (i.e. J = N), we move down to line 50 and print the result. At line 60 we encounter the NEXT I statement, jump back to line 30, increase the variable I by PI/8 radians, and compute all of the harmonic terms for the next position along the x axis.
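For readers without a BASIC interpreter handy, the loop structure just described can be rendered in Python. This is our translation, an assumption on our part; the book's numbered listing is the authoritative version:

```python
import math

def dft_1_0(n_terms):
    """Sum the odd harmonics SIN(J*I)/J at each point I = 0, PI/8, 2*PI/8, ...,
    mirroring lines 30-60 of DFT 1.0."""
    points = []
    for k in range(17):                      # I from 0 to 2*PI in steps of PI/8
        i = k * math.pi / 8
        y = 0.0                              # line 32: clear the accumulator
        for j in range(1, n_terms + 1, 2):   # line 40: odd harmonics only
            y += math.sin(j * i) / j
        points.append(y)
    return points
```

With a large number of terms the values settle near +PI/4 over the first half cycle and -PI/4 over the second, the flat-topped square wave of fig. 1.2.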

1.07 EXPERIMENTATION/PRACTICE

The reader should type the above program into a computer and run it. Once you have it working, try variations - sum up hundreds (or even thousands) of harmonic components - modify the mathematical function itself. A simple modification will produce a "ramp" or "sawtooth" function (as opposed to the square wave). Simply allow the loop counter in line 40 (i.e. J) to step through all of the harmonic numbers (i.e. remove the optional "STEP 2" statement in line 40).

Figures 1.4.x (where "x" indicates "don't care") show some of the waveshapes that may be obtained, along with the variations required to the equation in line 40. These curves illustrate the effect obtained by simple modifications of the "spectrum" (i.e. the amplitudes and phases of the harmonic components). After playing with this program, and generating a sufficiently large number of functions, we might suspect that any of the common waveshapes encountered in engineering could be produced by selecting the correct spectrum. There are an infinite number of combinations of amplitudes and phases for the harmonic components, which correspond to an infinite number of time domain waveshapes; unfortunately, this falls short of proving that any waveshape can be produced by this means.

[Figures 1.4.1 through 1.4.6 - Waveshapes obtained from variations of line 40:]
Fig. 1.4.1 - Y=Y+Sin(J*I)/J for all terms.
Fig. 1.4.2 - Y=Y-Cos(J*I)/(J*J) for odd terms.
Fig. 1.4.3 - Y=Y+Sin(I*J)/(J*J) for odd terms.
Fig. 1.4.4 - Y=Y+Sin(I*J)/(J*J) for all terms.
Fig. 1.4.5 - Y=Y+Cos(I*J)/J for odd terms.
Fig. 1.4.6 - Y=Y-(-1)^J*Cos(J*I)/(4*J*J-1) for all terms. Initialize Y to Y = 0.5.
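The sawtooth modification described above (remove the "STEP 2" so that every harmonic is summed) can be sketched in a few lines. This is a Python rendering of ours, not the book's BASIC:

```python
import math

def sawtooth_point(i, n_terms):
    """Sum Sin(J*I)/J over ALL harmonics J = 1..n_terms (cf. Fig. 1.4.1)."""
    return sum(math.sin(j * i) / j for j in range(1, n_terms + 1))
```

Between 0 and 2*PI this sum converges to the ramp (PI - I)/2, which is why dropping the "STEP 2" turns the square wave into a sawtooth.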

In any case the above illustration has the cart before the horse. We are almost always provided with a time domain waveshape for which we must find the equivalent frequency domain spectrum. It is apparent here that one of the underlying assumptions of generalized Fourier Analysis is that time domain signals must, in fact, have frequency domain equivalents.

1.07.1 FREQUENCY DOMAIN

Figure 1.5 plots the amplitudes of the harmonic components against the harmonic number of the component, displaying the "spectrum" of a square wave. Now, in accordance with our earlier definition of a function, we recognize that this spectrum is itself a function. The harmonic number (or more commonly the equivalent frequency) represents the independent variable of this function, and the amplitude of the harmonic component represents the dependent variable. The total interval of the frequencies represents the domain of this new function; consequently, we refer to this function as the frequency domain function. It is this frequency domain function that we seek to obtain with Fourier Analysis (i.e. the transformation from the time domain to the frequency domain).

[Figure 1.5 - Square Wave Spectrum (Amplitude vs. Frequency)]

It should be apparent that the frequency domain function describes the same entity as the time domain function. In the time domain all of the sinusoid components are summed together into the resultant. In the frequency domain, however, we separate out the components and plot the amplitudes (and phases) of the individual sinusoids. It should be absolutely clear, then, that we are looking at the same thing here.

1.07.2 REALITY OF THE FREQUENCY DOMAIN

When first presented with the proposition that all time domain waveshapes are composed of sinusoids, we tend to question the physical reality of the components. We "know" that the time domain signal is the "real" signal and the frequency components are "just another way of analyzing things." Seasoned veterans, however, have no difficulty accepting the sinusoids as completely real. Let us stop here and ask, once and for all, are the sinusoids real? Or are they only mathematical gimmicks? Or is this, in fact, a moot question?

The education of electrical engineers, for example, is grounded in the frequency domain. They are taught to think in terms of the frequency domain. They are taught to test their circuits by driving the input with a sinusoid while observing the output. By repeating this test for a range of frequencies they determine the frequency response of their circuits. As a specific example, they rarely think about audio in the time domain - music is an ever changing kaleidoscope of fundamentals and harmonics. Elsewhere, they learn that modifying the frequency response of a circuit in certain ways will achieve predictable modifications to the time response, e.g. low pass filtering will reduce the higher frequency components, thereby reducing noise, slowing rise times, etc. This sort of experience, coupled with the knowledge that waveforms can be viewed as summations of sinusoids, leads the student into a paradigm that actually prefers the frequency domain. Engineers can always arrive at completely logical and self-consistent conclusions in the frequency domain, and frequently with much less work than in the time domain. After working in the field for a few years the notion of frequency domain takes on a sense of reality for engineers and technicians that others may not share.

[Fig. 1.6 - Astable Multivibrator (schematic)]

Let's look at a concrete example - suppose we build an astable multivibrator and use it to generate a square wave (actually, astable multivibrators do not produce waveshapes that are very "square", so a "buffer" stage is added in the schematic above). When we view the output waveshape we might justifiably ask, "where are all the sine waves?" (See Fig. 1.7 below.) On the other hand, we could synthesize a square wave by combining the outputs of thousands of sine wave generators, just as we did with the computer program several pages back. When we had finished synthesizing this waveform, we would have produced the same thing the astable multivibrator produced - a square wave (allowing that our generators produced harmonics that extended beyond the bandwidth of the testing circuitry). If we took some instrument (such as a wave analyzer or spectrum analyzer) that was capable of measuring the harmonic components of our synthesized wave, we would expect to find each of the sine wave components just summed into that wave shape. But the time domain wave shape of the synthesizer is the same as the output of our multivibrator. If we use our wave analyzer on the multivibrator output, we will surely find the same components that we found in the synthesized wave, because the two time domain wave shapes are the same. The two are equivalent. A summation of sinusoids is the same thing as the time domain representation of the signal. That is what the examples of Fig. 1.4.x illustrate. A multivibrator may be thought of as a clever device for simultaneously generating a great many sinusoids. The only difference is in our perception - our understanding.

[Fig. 1.7 - Output Waveform of Astable Multivibrator]

NOTE: Each generator is a sine wave signal source. The frequencies are odd multiples of the "fundamental" V1(t) generator and the amplitudes are inversely proportional to their frequency.

[Fig. 1.8 - Square Wave Synthesizer]

[Plot: the synthesizer output plotted point by point on a grid; its time domain waveshape is a square wave.]

Fig. 1.9 - Synthesizer Waveform

1.08 WHAT IS THE DFT?

The DFT is a procedure, or process, that can analyze the data points of a "digitized" time domain function to determine a series of sinusoids which, when summed together, reproduce the data points of the original function. The resulting Digital Fourier series is a valid expression of the original function, just as the Taylor series examples given in section 1.01 are valid expressions of sines, cosines, etc. It is apparent, however, that the Digital


Fourier Transform is different from the Taylor series, although the exact nature of the difference may still be less than completely obvious. Let's take a moment and focus precisely on some of the differences: the Taylor series, as illustrated in equations (1.4) and (1.5), evaluates a specific function at a given argument. The coefficients for any specific series are determined once, and thereafter never change. The Taylor series is used in calculators and computers to evaluate a sine, cosine, exponential, etc., for a given argument, and when we use the Taylor series, only the argument changes. Now the DFT is a process used to determine the coefficients of a trigonometric series for a given function (i.e. we analyze an arbitrary function to determine the amplitudes (the coefficients) for a series of sinusoids). In contrast to the Taylor series, the arguments of a DFT function are fixed and usually remain unchanged; when operating in the frequency domain it is generally the coefficients of the transformed function that we modify. Obviously, the Fourier series and the Taylor series have completely different purposes.

What, then, is the purpose of the DFT? A great many procedures, techniques and theorems have been developed to work with functions in the frequency domain. As it turns out, in the frequency domain we may easily perform relatively difficult mathematical techniques like differentiation, integration, or convolution via simple multiplication and division (in some cases this is the only way we can perform these operations). At a higher level of problem solving, we can perform minor miracles. We may, of course, examine the frequency spectra of time domain waveshapes, and taking the next obvious step, perform digital filtering. From here it is only a small step to enhance photographic images, bringing blurry pictures into sharp focus, but we may continue along this line of development to remove image distortions due to aberrations in the optical system (re: the Hubble telescope). We can do other things that may not be so obvious, such as speed up the playback of recorded messages without changing pitch, or convert a television format from 50 frames/sec


to 60 frames/sec without speeding up the action. Working in the frequency domain we can perform still more amazing things than these, but the best is certainly yet to come as a new generation of scientists and engineers continue to explore and develop the field of Digital Signal Processing (DSP).
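Readers with a computer handy can see one of these frequency domain shortcuts at work. The sketch below (in Python; the code and the helper names `dft` and `idft` are ours, not from the text) performs a circular convolution twice, once directly in the time domain and once by simple multiplication of the transforms in the frequency domain, and checks that the two answers agree:

```python
import math

def dft(x):
    """Naive forward DFT of a sequence (complex result)."""
    N = len(x)
    return [sum(x[n] * complex(math.cos(2 * math.pi * k * n / N),
                               -math.sin(2 * math.pi * k * n / N))
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, scaled by 1/N so that idft(dft(x)) reproduces x."""
    N = len(X)
    return [sum(X[k] * complex(math.cos(2 * math.pi * k * n / N),
                               math.sin(2 * math.pi * k * n / N))
                for k in range(N)) / N for n in range(N)]

a = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0]
b = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
N = len(a)
# Circular convolution done the hard way, in the time domain...
direct = [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]
# ...and the easy way: multiply the transforms, then transform back.
via_freq = [c.real for c in idft([p * q for p, q in zip(dft(a), dft(b))])]
assert max(abs(d - v) for d, v in zip(direct, via_freq)) < 1e-9
```

The same multiply-in-the-frequency-domain trick is what makes digital filtering and the image-sharpening applications mentioned above practical.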

There is a difference between the Taylor and Fourier series that we still may not have made apparent. The terms of the Taylor series are summed to evaluate the function at a single point (i.e. at some specific argument). The transformation and inverse transformation of the DFT, on the other hand, involve all of the values of the function within the domain of definition. That is, we transform the whole function. When we speak of a function proper, we are not talking about the value of the function at any specific point, but rather, we are talking of the values of all of its points. It is one of the little marvels of the DFT that it can transform all of the points of a function, as it were, simultaneously, from the time domain to the frequency domain-and then back to the time domain again via the inverse transform.

1.09 WHAT IS THE FFT?

What the FFT is, of course, is the question we will spend most of this book answering. For the moment, though, it will be worthwhile to present an analogy which shows clearly what we hope to accomplish. In calculators and computers the approximation of functions such as SIN(X), COS(X), ATN(X), EXP(X), etc., may be obtained by the Taylor series (as we explained previously); but there is a problem in applying these series directly-they are too slow! They take too long to converge to the accuracy required for most practical work. Use of the Taylor series would be severely limited had not our friends, the mathematicians, figured out the following way to make it run faster:


We observe, as a practical matter, that all of the different series required are of the polynomial form:

F(x) = A0 + A1x + A2x^2 + A3x^3 + ... + Anx^n ------- (1.10)

where the An terms must be substituted into the polynomial for the specific function being evaluated (see eqns. 1.4 and 1.5 for examples). The "Horner Scheme" takes advantage of this generalization by solving the polynomial in the following form:

F(x) = A0 + x(A1 + x(A2 + x(A3 + ... + x(An) ... ))) --- (1.11)

where we have repeatedly factored x out of the series at each succeeding term. Now, at the machine language level of operation, numbers are raised to an integer power by repeated multiplication, and an examination of (1.10) and (1.11) above will show that for an Nth order polynomial this scheme reduces the number of multiplications required from (N^2+N)/2 to N. When one considers that N ranges upwards of 30 (for double-precision functions), so that the Horner Scheme yields execution times an order of magnitude faster, the power of this algorithm becomes apparent.
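As an illustration (ours, not from the text), the following Python fragment evaluates a 4th order polynomial both ways and counts the multiplications used to raise x to its powers, which is the quantity the (N^2+N)/2 figure refers to:

```python
def eval_direct(coeffs, x):
    """A0 + A1x + A2x^2 + ..., with x^k formed by repeated multiplication."""
    total, mults = coeffs[0], 0
    for k in range(1, len(coeffs)):
        p = 1.0
        for _ in range(k):      # k multiplications to build x^k
            p *= x
            mults += 1
        total += coeffs[k] * p
    return total, mults

def eval_horner(coeffs, x):
    """The same polynomial in nested form: A0 + x(A1 + x(A2 + ...))."""
    total, mults = coeffs[-1], 0
    for a in reversed(coeffs[:-1]):
        total = a + x * total   # one multiplication per step
        mults += 1
    return total, mults

coeffs = [1.0, -2.0, 3.0, 0.5, 4.0]        # an N = 4th order polynomial
v1, m1 = eval_direct(coeffs, 1.5)
v2, m2 = eval_horner(coeffs, 1.5)
assert abs(v1 - v2) < 1e-12                # same value either way
assert m1 == (4 * 4 + 4) // 2 and m2 == 4  # (N^2+N)/2 = 10 versus N = 4
```

For N = 30 the same count comes to 465 multiplications versus 30, the order-of-magnitude saving claimed above.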

The above is particularly prophetic in our case. The DFT, although one of the most powerful weapons in the digital signal processing arsenal, suffers from the same malady as the Taylor series described above-when applied to practical problems it tends to bog down-it takes too long to execute. The FFT, in a way that is quite analogous to the Horner Scheme just described, is an algorithm that greatly reduces the number of mathematical


operations required to perform a DFT. Unfortunately the FFT is not as easy to explain as the Horner Scheme; although, as we shall see, it is not as difficult as the literature usually makes it out to be either.

1.10 CONCLUSION/HISTORICAL NOTE

Throughout this chapter we have repeated the proposition that physically realizable waveshapes can always be represented as a summation of sine and cosine waves. We have also discussed things such as the nature of "functions", etc., but the summation of sinusoids has obviously been our central theme. This proposition is the foundation of Fourier Analysis. The primary purpose of this chapter has been to convey this fundamental idea.

The widespread use of Fourier Analysis implies this proposition is valid; still, when we are presented with a concept whose logical foundations are not readily apparent, our natural curiosity makes us wonder how it came about. Who was the first to discover it, and how did they figure it out? What made someone suspect that all functions could be represented as a series of sinusoids? Early on we saw that the summation of sinusoids could produce complex looking waveshapes. A perceptive soul, recognizing this fact, might well move on to investigate how far this process could be extended, what classes of functions could be evaluated by this method, and how the terms of each such series could be determined.

F(x) = A0 + A1Cos(x) + A2Cos(2x) + A3Cos(3x) + ...
          + B1Sin(x) + B2Sin(2x) + B3Sin(3x) + ... ------- (1.11)

Daniel Bernoulli, in the 18th century, recognized that


functions could be approximated by a trigonometric series, and many mathematicians worked with the notion afterward, but it was Jean Baptiste Joseph Fourier, in the 19th century, who demonstrated the power of this technique as a practical problem-solving tool. We might note that this did not bring Fourier immediate praise and fame, but rather, harsh criticism and professional frustration. His use of this technique was strongly opposed by no less a mathematician than Lagrange (and others). Lagrange was already familiar with trigonometric series, of course, but he also recognized the peculiarities of their behavior. That trigonometric series were universally applicable was not at all obvious at that time.

The point here is that Fourier did not completely understand the tool he used, nor did he invent it. He had no proof that trigonometric series could provide universally valid expressions for all functions. The picture we see here is of brilliant men struggling with mathematical concepts they cannot quite grasp, and we begin to realize that the question, "Who invented Fourier Analysis?" is somewhat naive. There was no single great flash of insight; there were only many good men working tirelessly to gain understanding. Today no one questions the application of Fourier Analysis but, in fact, Lagrange was correct: there are functions that cannot be transformed by Fourier's method. Fortunately, these functions involve infinities in ways that never occur in physically realizable systems, and so, Fourier is also vindicated.

Books on Fourier Analysis typically have a short historical note on the role of J.B.J. Fourier in the development of trigonometric series. Apparently there is a need to deal with how a thing of such marvelous subtlety could be comprehended by the human mind-how we could discover such a thing. While the standard reference is J. Herivel, Joseph Fourier, The Man and the Physicist, Clarendon Press, one of the better summations is given by R.N. Bracewell in chapter 24 of his text The Fourier Transform and its Applications, McGraw Hill. He also sheds light on the matter in the section on Fourier series in chapter 10.

CHAPTER II

FOURIER SERIES AND THE DFT

2.0 INTRODUCTION

It is assumed that most readers will already be familiar with the Fourier series, but a short review is nonetheless in order to re-establish the "mechanics" of the procedure. This material is important since it is the foundation for the rest of this book. In the following, considerations discussed in the previous chapter are assumed as given.

2.1 MECHANICS OF THE FOURIER SERIES

You may skip section 2.1 with no loss of continuity. It is the only section in this book that employs Calculus. The Fourier series is a trigonometric series F(t) by which we may approximate some arbitrary function f(t). Specifically, F(t) is the series:

F(t) = A0 + A1Cos(t) + B1Sin(t) + A2Cos(2t) + B2Sin(2t) + ...

       ... + AnCos(nt) + BnSin(nt) ----------------------- (2.1)


and, in the limit, as n (i.e. the number of terms) approaches infinity:

F(t) = f(t) ------------------------------------- (2.2)

The problem we face in Fourier Analysis, of course, is to find the coefficients of the frequency domain sinusoids (i.e. the values of A0, A1, ..., An, and B1, ..., Bn) which make eqn. (2.2) true.

Finding A0 is easy: if we integrate F(t) (i.e. eqn. 2.1) from 0 to 2π, all sinusoid terms yield zero so that only the A0 term is left:

∫0^2π F(t) dt = 2πA0 -------------------- (2.3)

From eqn. (2.2) and the following condition:

1/2π ∫0^2π f(t) dt = mean value ------------- (2.4)

it follows that:

A0 = 1/2π ∫0^2π f(t) dt = mean value ----- (2.5)


Next, if we multiply both sides of eqn. (2.1) by Cos(t) and integrate from 0 to 2π, the only non-zero term will be:

∫0^2π F(t)Cos(t) dt = ∫0^2π A1Cos^2(t) dt = πA1 --- (2.6)

This results from the fact that:

∫0^2π Cos(rx)Cos(qx) dx = 0 ------- (2.7.1)

∫0^2π Cos(rx)Sin(px) dx = 0 ------- (2.7.2)

∫0^2π Sin(rx)Sin(qx) dx = 0 ------- (2.7.3)

Where: r, q, and p are integers and r ≠ q

From eqns. (2.2) and (2.6) we may then evaluate A1:

A1 = 1/π ∫0^2π f(t)Cos(t) dt ----------- (2.8)

From the same argument, if we multiply eqn. (2.1) by Sin(t) and integrate from 0 to 2π, the only non-zero term will be:


∫0^2π B1Sin^2(t) dt = πB1 ------------------ (2.9)

We may therefore evaluate B1:

B1 = 1/π ∫0^2π f(t)Sin(t) dt ----------- (2.10)

If we continue through the other terms of eqn. (2.1) we will find that the procedure for determining the A and B coefficients may be summarized by the following:

A0 = 1/2π ∫0^2π f(t) dt ----------------- (2.11A)

Ak = 1/π ∫0^2π f(t)Cos(kt) dt ---------- (2.11B)

Bk = 1/π ∫0^2π f(t)Sin(kt) dt ---------- (2.11C)

With: k = 1, 2, 3, ..., n

n = number of terms included in the series.

As n approaches infinity, we must necessarily include all possible sinusoidal components (by sinusoidal we imply both sine and cosine), and F(t) converges to f(t).
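The coefficient formulas are easy to check numerically. The Python sketch below (ours, not from the text) approximates the integrals of eqns. (2.11A-C) with simple sums, for a test function whose coefficients we know in advance:

```python
import math

# Test function: f(t) = 3 + 2Cos(t) + 0.5Sin(3t), so we expect
# A0 = 3, A1 = 2, B3 = 0.5, and zero for every other coefficient.
N = 1024
dt = 2 * math.pi / N
f = [3 + 2 * math.cos(n * dt) + 0.5 * math.sin(3 * n * dt) for n in range(N)]

# Discrete approximations of eqns. (2.11A), (2.11B), and (2.11C):
A0 = sum(f[n] for n in range(N)) * dt / (2 * math.pi)
def A(k): return sum(f[n] * math.cos(k * n * dt) for n in range(N)) * dt / math.pi
def B(k): return sum(f[n] * math.sin(k * n * dt) for n in range(N)) * dt / math.pi

assert abs(A0 - 3.0) < 1e-9
assert abs(A(1) - 2.0) < 1e-9 and abs(B(3) - 0.5) < 1e-9
assert abs(A(2)) < 1e-9 and abs(B(1)) < 1e-9   # absent components read zero
```

Replacing the integrals with sums over sampled data points is, of course, exactly the step that turns the Fourier series into the DFT of the sections that follow.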


COMMENTARY

We should take the time to point out a few things about the above derivation. Our starting equations (2.1) and (2.2) simply make the statement that a given arbitrary function f(t) may be considered to be a summation of sinusoids as explained in the previous chapter. It is well known that functions exist for which this condition is untrue; fortunately, it is true for all physically realizable systems.

Equations (2.8) and (2.10) express the mathematical operation that is the heart of the Fourier series; the individual sinusoids of a composite wave can be "detected" by multiplying through with unit sinusoids and finding the mean value of the resultant. This process is all that the DFT (and FFT) does.

The reader should understand that the relationships expressed in equations (2.7.1) through (2.7.3) (i.e. the orthogonality relationships) are imperative to the proper operation of this algorithm; furthermore, these equations are true only when evaluated over an integer number of cycles. In practice the Fourier series, DFT, and FFT force this condition for any time domain T0 by restricting the arguments of the sinusoids to integer multiples of 2πNt/T0 (where N is an integer).

In addition to these comments, the DFT deals with arrays of discrete, digital data. There are no linear, continuous curves in a computer. We will spend the rest of this chapter delving into how we apply the mathematical process described so neatly above to the digitized data we process inside a computer.


2.2.0 MECHANICS OF THE DFT

The DFT is an application of Fourier Analysis to discrete (i.e. digital) data. Our objective in this chapter is to find out what makes the DFT work-and why. At considerable risk to personal reputation, we will employ only simple, direct illustrations. It would be safer to brandish the standard confusion and abstruse mathematics; but then, there are plenty of books already on the market to fulfill that requirement. We will start from the concepts covered in the previous chapter and develop our own process for extracting harmonic components from arbitrary waveforms.

2.2.1 THE OBJECTIVE OF THE DFT PROCESS

We saw in chapter 1 that sinusoids could be summed together to create common waveforms. Here we consider the reverse of that process. That is, given some arbitrary waveshape, we try to break it down into its component sinusoids.

Let's talk about this for a moment. When we are presented with a composite waveshape, all of the sinusoids are "mixed together," sort of like the ingredients of a cake. Our question is, "how does one go about separating them?" We already know, of course, that it is possible to separate them (not the ingredients of cakes; the components of composite waveforms), but let's forget for a moment that we know about Fourier Analysis ...


2.2.2 REQUIREMENTS FOR A DFT PROCEDURE

There are two, perhaps three, requirements for a method to "separate out" the components of a composite wave:

1) First, we require a process that can isolate, or "detect", any single harmonic component within a complex waveshape.

2) To be useful quantitatively, it will have to be capable of measuring the amplitude and phase of each harmonic component.

These, of course, are the primary requirements for our procedure; but, there is another requirement that is implied by these first two:

3) We must show that, while measuring any harmonic component of a composite wave, our process ignores all of the other harmonic components (i.e. it must not include any part of the other harmonics). In other words, our procedure must measure the correct amplitude and phase of the individual harmonics. Very well then, let's see how well the DFT fulfills these requirements.

2.2.3 THE MECHANISM OF THE DFT

Let's begin with a typical digitized sine wave as shown in Figure 2.1 below. The X axis represents time; the Y axis represents volts (which, in turn, may represent light intensity, or pressure, or etc.) It has a peak amplitude of 2.0 volts, but its average value is obviously zero. The digitized numbers (i.e. our function) are stored in a nice, neat, "array" within the computer


(see Table 2.1). The interval from the beginning to end of this array is the domain (see first column Table 2.1).

[Plot: one cycle of the digitized sine wave, its sample points marked; the Y axis is marked at 1.0 and -1.0.]

Figure 2.1 - Digitized Sinusoid

Now, according to Fourier, the way to detect and measure a sine wave component within a complex waveshape is to multiply through by a unit amplitude sine wave (of the identical frequency), and then find the average value of the resultant. This is the fundamental concept behind Fourier Analysis and consequently we will review it in detail. First, we create a unit amplitude sine wave (see Fig. 2.2). To multiply our digitized waveform by a unit sine wave we multiply each point of the given function by the corresponding point from the unit sinusoid function (apparently the two


[Plot: the unit sinusoid, the digitized sinusoid (2 volts peak, marked 2.0 and -2.0), and their point-by-point product of sinusoids, which lies entirely above zero.]

Figure 2.2 - Fourier Mechanism

functions must have the same number of data points, corresponding domains, etc.) This process (and the result) is shown in Figure 2.2 and Table 2.1. The reason this works, of course, is that both sine waves are positive at the same time (yielding positive products), and both are negative at the same time (still yielding positive products), so that the products of these two functions will have a positive average value. Since the average value of a sinusoid is normally zero, this process has "rectified" or "detected" the original digitized sine wave of Fig. 2.1. The sum of all of the products is 16 (see Table 2.1 below), and since there are 16 data points in the array, the average value is 1.0.
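The arithmetic just described is easily reproduced on any machine. A minimal Python sketch (ours, not from the text) of the 16-point detection:

```python
import math

# A 16-point digitized sine of 2 volts peak, multiplied point by point by a
# unit amplitude sine of the same frequency (the columns of Table 2.1).
N = 16
unit = [math.sin(2 * math.pi * n / N) for n in range(N)]
signal = [2.0 * s for s in unit]                  # the 2 V digitized sinusoid
products = [u * s for u, s in zip(unit, signal)]  # the 2*Sin^2(X) column

total = sum(products)
average = total / N
assert abs(total - 16.0) < 1e-9 and abs(average - 1.0) < 1e-9
```

The sum of the products is 16 and the average is 1.0, half the 2-volt peak of the input, exactly as the table below shows.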


T          Sin(X)     2*Sin(X)    2*Sin^2(X)
0.00000    0.00000    0.00000     0.00000
0.06250    0.38268    0.76537     0.29289
0.12500    0.70711    1.41421     1.00000
0.18750    0.92388    1.84776     1.70711
0.25000    1.00000    2.00000     2.00000
0.31250    0.92388    1.84776     1.70711
0.37500    0.70711    1.41421     1.00000
0.43750    0.38268    0.76537     0.29289
0.50000    0.00000    0.00000     0.00000
0.56250    -0.38268   -0.76537    0.29289
0.62500    -0.70711   -1.41421    1.00000
0.68750    -0.92388   -1.84776    1.70711
0.75000    -1.00000   -2.00000    2.00000
0.81250    -0.92388   -1.84776    1.70711
0.87500    -0.70711   -1.41421    1.00000
0.93750    -0.38268   -0.76537    0.29289
Totals =   0.00000    0.00000     16.00000

Average Value = 16.0/16 = 1.00000

Table 2.1

Note that the average amplitude of the right hand column is only half of the peak amplitude of the input function (3rd column). We may show that the average value obtained by the above procedure will always be half of the original input amplitude as follows:

The input function is generated from the following equation:


F1(t) = A Sin(2πt) ---------------------- (2.12)

Where: t = Time
       A = Peak amplitude

NOTE: A frequency (or period) of unity is implied here.

We simplify by replacing the argument (2πt) with X:

F(X) = A Sin(X) ---------------- (2.13)

Multiplying through by a sine wave of unit amplitude:

F(X) Sin(X) = A Sin(X)Sin(X) --------- (2.14)
            = A Sin^2(X) ------------- (2.14A)

However, from the trigonometric identity:

Sin^2(X) = 1/2 - Cos(2X)/2 ------- (2.15)

which we substitute into (2.14A):

F(X) Sin(X) = A (1/2 - Cos(2X)/2)
            = A/2 - A Cos(2X)/2 -- (2.16)

The second term of (2.16) (i.e. A Cos(2X)/2) describes a sinusoid, so that its average value will be zero over any number of full cycles; it follows that the average value of eqn. (2.16) (over any integer multiple of 2π radians) will be A/2 (see Figure 2.3).

This result is more or less obvious from an inspection of Figs. 2.2 and 2.3. It is apparent that the maximum value will occur at the peaks of the two sinusoids, where the unit sinusoid has an


[Plot: the A Sin^2 wave, with peak amplitude A and average amplitude A/2.]

Figure 2.3 - A Sin^2 Wave

amplitude of 1.0 and the product of the two functions is A. The minimum value will occur when the sinusoids are passing through zero. From the symmetry of Fig. 2.3 it is apparent that the average value must be A/2.

This process of detecting or rectifying sinusoids, then, has the characteristic of yielding only half the amplitude of the actual component. This presents no major problem though, as we can simply multiply all of the results by two, or use some other technique to correct for this phenomenon.

2.2.4 THE COSINE COMPONENT

It is obvious that this scheme will work for any harmonic component; we need only change the frequency of the unit


amplitude sine wave to match the frequency of the harmonic component being detected. This same scheme will work for the cosine components if we replace the unit sine function with a unit cosine function.

The component we want to detect is given by:

F2(t) = A Cos(2πt) = A Cos(X) ----- (2.17)

Where: All terms are as in (2.12) and (2.13) above.

Multiplying through by a cosine wave of unit amplitude:

F(X) Cos(X) = A Cos(X)Cos(X) --------- (2.18)
            = A Cos^2(X) ------------- (2.18A)

From the identity:

Cos^2(X) = 1/2 + Cos(2X)/2 ------- (2.19)

which we substitute into (2.18A):

F(X) Cos(X) = A (1/2 + Cos(2X)/2)
            = A/2 + A Cos(2X)/2 --- (2.20)

Again, the second term will have a zero average value while the first term is one half the input amplitude. Note carefully in the above developments that, to produce a workable scheme, we must design our system such that we always average over full cycles of the sinusoids.
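Both points, the A/2 result and the full-cycle requirement, can be checked numerically. A short Python sketch (ours, not from the text):

```python
import math

# Detect a cosine of peak amplitude A with a unit cosine and average.
# Over a whole cycle the result is A/2; over a partial cycle the
# Cos(2X)/2 term of eqn. (2.20) no longer averages away.
A, M = 3.0, 64   # amplitude and samples per full cycle

def detected_average(points):
    return sum(A * math.cos(2 * math.pi * n / M) * math.cos(2 * math.pi * n / M)
               for n in range(points)) / points

full = detected_average(M)              # exactly one full cycle
partial = detected_average(3 * M // 4)  # only three quarters of a cycle
assert abs(full - A / 2) < 1e-9         # A/2, as eqn. (2.20) predicts
assert abs(partial - A / 2) > 0.01      # the scheme fails off-cycle
```

The partial-cycle average misses A/2 by a small but definite amount, which is why the system must always be designed to average over full cycles.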


2.2.5 HARMONICS WITHIN COMPOSITE WAVEFORMS

Next question: "What about the other harmonics in composite waveforms?" The scheme described above undoubtedly works when we are dealing with a single sinusoid, but when we are dealing with a composite of many harmonics, how do we know that all of the other harmonics are completely ignored? You will recall that our third requirement for a method to extract sinusoidal components from a composite wave was that the process ignore all but the sinusoid being analyzed. Technically, this condition is known as orthogonality.

2.2.6 ORTHOGONALITY

What, exactly, does the term orthogonality imply? Two straight lines are said to be orthogonal if they intersect at right angles; two curved lines are orthogonal if their tangents form right angles at the point of intersection. Consider this: the "scalar product", or "dot product", between two vectors is defined as follows:

A·B = |A| |B| Cos(θ) ------------------ (2.21)

Where: |A| = magnitude of vector A, etc.

As the angle θ between the two vectors approaches ±90 degrees, Cos θ approaches zero, and the dot product approaches zero. It is apparent, then, that a zero dot product between any two finite vectors implies orthogonality. It is apparent that zero magnitude


[Diagram: two vectors A and B separated by angle θ; A·B = AB Cos(θ), and the projection of A onto B is Aproj = A Cos(θ).]

Figure 2.4 - Dot Product

vectors will always yield a zero dot product regardless of the angle (in fact, the notion of angle begs for definition here). In practice we effectively define vectors of zero magnitude as orthogonal to all other vectors, and may therefore use a zero resultant from equation (2.21) as the operative definition for orthogonality.

DEFINITION 1:

If A·B = 0 then A and B are orthogonal.

Note that, by this definition, zero magnitude vectors are orthogonal to all other vectors.
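Definition 1 is simple enough to try in a few lines. A Python sketch (ours, not from the text), using unit vectors in the plane:

```python
import math

def dot(a, b):
    """Component form of the dot product of eqn. (2.21)."""
    return sum(x * y for x, y in zip(a, b))

A = (1.0, 0.0)                                       # unit vector on the X axis
B = (math.cos(math.pi / 2), math.sin(math.pi / 2))   # unit vector at 90 degrees
assert abs(dot(A, B)) < 1e-12        # zero dot product: orthogonal
assert abs(dot(A, A) - 1.0) < 1e-12  # |A||A|Cos(0) = 1: not orthogonal
```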

The definition of orthogonality between whole functions derives from an argument not completely dissimilar to the above. To illustrate, we will first use equation (2.21) to generate a function F(φ). We will do this with a specific example where A is


a unit vector lying along the X axis and B is a unit vector in the direction of φ (where φ takes on a set of values over the domain of 0 to 2π). Table 2.2 shows this function explicitly; note that, according to our definition above, the two vectors are orthogonal at only two points (i.e. φ = π/2 and 3π/2).

φ        F(φ)
0        1.000
π/8      0.924
π/4      0.707
3π/8     0.383
π/2      0.000
5π/8     -0.383
3π/4     -0.707
7π/8     -0.924
π        -1.000
9π/8     -0.924
5π/4     -0.707
11π/8    -0.383
3π/2     0.000
13π/8    0.383
7π/4     0.707
15π/8    0.924

Table 2.2 - F(φ) = |A| |B| Cos(φ)

But our concern is not for individual values of a function; our concern is for orthogonality between whole functions. To investigate this situation we need a second function, G(φ), which we will define as follows:

G(φ) = |A| |B| Cos(φ + π/2) ------- (2.22)


Let's talk about these two functions, F(φ) and G(φ), for a moment. For both functions vector A lies along the X axis, while B "rotates" about the origin as a function of φ. The difference between these two functions is that the argument of G(φ) is advanced by π/2 (i.e. 90 degrees) so that the vector B of G(φ) will be orthogonal to B in

φ        F(φ)     G(φ)     F(φ)G(φ)
0        1.000    0.000    0.000
π/8      0.924    -0.383   -0.354
π/4      0.707    -0.707   -0.500
3π/8     0.383    -0.924   -0.354
π/2      0.000    -1.000   0.000
5π/8     -0.383   -0.924   0.354
3π/4     -0.707   -0.707   0.500
7π/8     -0.924   -0.383   0.354
π        -1.000   0.000    0.000
9π/8     -0.924   0.383    -0.354
5π/4     -0.707   0.707    -0.500
11π/8    -0.383   0.924    -0.354
3π/2     0.000    1.000    0.000
13π/8    0.383    0.924    0.354
7π/4     0.707    0.707    0.500
15π/8    0.924    0.383    0.354
                Sum Total  0.000

Table 2.3 - F(φ)G(φ)

F(φ) for all values of φ. In other words, the vectors which actually generate these two functions are orthogonal at all points of the functions.

Now the two functions created here are not vectors; being scalar products of vectors, they are scalars. Therefore, the definition of orthogonality given above is inappropriate. Still, these two


[Diagram: vector A along the X axis; B at angle φ generates F(φ) = AB Cos(φ), and B advanced to φ + π/2 generates G(φ) = AB Cos(φ + π/2).]

Fig. 2.5 - Orthogonal Functions F(φ) and G(φ)

functions were created by orthogonal vectors and we would like to find out if there is some vestige-some latent characteristic-that can still expose the orthogonality relationship. In fact, there is. If we multiply these two functions point by point, and then sum all of the individual products, the result will be zero (see Table 2.3). You may recognize that this process is essentially the process we used to detect sinusoids back in section 2.2.3. If so, you may also begin to grasp the connection we are trying to make here, but bear with me for a moment longer. Let's generalize the above results as follows:

G(φ) = |A| |B| Cos(φ + θ) ------- (2.23)

If we replace the π/2 in G(φ) by the variable θ, and repeat the test for orthogonality using various values of θ (see Table 2.4), we see that a zero resultant is achieved only when θ = π/2, indicating orthogonality between the vectors which generate the functions.


θ        Σ F(φ)iG(φ)i (i = 0 to N)
0        8.000
π/8      7.391
π/4      5.657
3π/8     3.061
π/2      0.000
5π/8     -3.061
3π/4     -5.657
7π/8     -7.391
π        -8.000

Table 2.4 - Orthogonality Test

So then, this process does indeed detect "orthogonality" between our functions. It is not necessary to prove this relationship for all cases, for at this point we simply adopt this procedure as our operative definition for orthogonality between functions. If we compute products between all of the points of the functions, and the summation of these products is zero, then the functions are orthogonal. Newcomers sometimes find this remarkable, for by this definition, we no longer care how the functions were generated. Nor do we ask that the functions be traceable to geometrically orthogonal origins in any sense. The only requirement is that the summation of the products of corresponding points between the two functions be zero.

DEFINITION 2:

If Σ F(φ)iG(φ)i = 0 (summed over i = 0 to N) then F(φ) and G(φ) are orthogonal.
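Definition 2 is easy to exercise on the 16-point functions of Tables 2.2 through 2.4. A Python sketch (ours, not from the text; the helper name `ortho_sum` is our own):

```python
import math

# F(phi) = Cos(phi), G(phi) = Cos(phi + theta), evaluated at the 16 points
# of Table 2.2; ortho_sum() forms the summation of Definition 2.
N = 16
phis = [2 * math.pi * i / N for i in range(N)]

def ortho_sum(theta):
    return sum(math.cos(p) * math.cos(p + theta) for p in phis)

assert abs(ortho_sum(math.pi / 2)) < 1e-9     # theta = pi/2: orthogonal
assert abs(ortho_sum(0.0) - 8.0) < 1e-9       # theta = 0: Table 2.4's 8.000
assert abs(ortho_sum(math.pi) + 8.0) < 1e-9   # theta = pi: -8.000
```

Sweeping theta through the other values of Table 2.4 reproduces that table's column of sums, with zero occurring only at theta = π/2.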


Orthogonality then is a condition which applies directly to the process we use as the Fourier mechanism. Some functions will be orthogonal (i.e. they will always give a zero resultant when we perform the process described above) and others will not. Obviously the example of section 2.2.3 (i.e. two sine waves of identical frequency) does not illustrate orthogonal functions.

The question here, however, is whether two sinusoids of different, integer multiple frequencies represent orthogonal functions. This, of course, is an imperative condition. If they are orthogonal, components that are not being analyzed will contribute zero to the resultant-if they are not orthogonal we have BIG problems. Let's see how this works out.

As before, we start with a digitized sine wave but this time we multiply through with a sine wave of twice the frequency (Fig. 2.6 below). By symmetry it is more or less apparent that this yields an average value of zero. Apparently we have orthogonal functions here, but we need a demonstration for the general case of any integer multiple frequency.

[Plot: Sin(x), Sin(2x), and their point-by-point product Sin(x)Sin(2x); by symmetry the product averages to zero.]

Figure 2.6 - Sin(x)Sin(2x)


Starting with the identity:

Sin(A)Sin(B) = [Cos(A-B) - Cos(A+B)] / 2 ------- (2.24)

if we let A and B represent arguments of:

A = 2πt and B = NA = 2Nπt

N = 1, 2, 3, 4, ... (i.e. N takes on integer values)

eqn. (2.24) becomes:

Sin(A)Sin(NA) = [Cos(A(1-N)) - Cos(A(1+N))] / 2 ------- (2.25)

It is interesting to note that when N = 1, the term Cos(A(1-1)) yields a value of 1 regardless of the value of A (i.e. all values of 2πt), and eqn. (2.25) reduces to:

Sin(A)Sin(A) = [1 - Cos(2A)] / 2 ------------ (2.26)

which is the same equation as (2.15) above. The term Cos(2A) generates a sinusoid as A varies from 0 to 2π and must therefore have a zero average value.

If we consider all other positive values of N in eqn. (2.25), we will always obtain a non-zero value for the argument of both terms. Consequently, both terms on the right in eqn. (2.25) will generate sinusoids, which guarantees an average value of zero for the function (averaged over any number of full cycles).
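A discrete check of this result is only a few lines of Python (ours, not from the text; the helper name `avg_product` is our own):

```python
import math

# Sin(A)Sin(NA), averaged over one full cycle, is zero for every integer
# N except N = 1, where the product is a rectified Sin^2 averaging to 1/2.
M = 64   # samples across one full cycle

def avg_product(N):
    return sum(math.sin(2 * math.pi * n / M) * math.sin(N * 2 * math.pi * n / M)
               for n in range(M)) / M

assert abs(avg_product(1) - 0.5) < 1e-9                     # eqn. (2.26)
assert all(abs(avg_product(N)) < 1e-9 for N in range(2, 8))  # orthogonal
```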


The second case we must examine is for two cosine waves of different frequencies. As before we start with an examination of the trigonometric identity:

Cos(A)Cos(B) = [Cos(A+B) + Cos(A-B)] / 2 ------- (2.27)

When B = NA then (2.27) becomes:

Cos(A)Cos(NA) = [Cos(A(1+N)) + Cos(A(1-N))] / 2 ------- (2.27A)

which shows that these functions are also orthogonal except when N = 1 and the arguments (i.e. the frequencies) are identical.

[Figure 2.7 plots Cos(x), Cos(3x), and their point-by-point product Cos(x)Cos(3x) over one cycle; again the product averages to zero.]

Figure 2.7 - Cos(x)Cos(3x)

Finally, we must examine the relationship between the cosine and sine functions for orthogonality. This can be shown by


the following identity:

    Sin(A)Cos(B) = [Sin(A+B) + Sin(A-B)]/2          (2.28)

and when B = NA:

    Sin(A)Cos(NA) = [Sin(A(1+N)) + Sin(A(1-N))]/2   (2.28A)

In this case it makes no difference whether N = 1 or not; if the argument is zero the value of the sine function is likewise zero; if the argument is multiplied by an integer the function will trace out an integer number of cycles as A varies from 0 to 2π. In no case will the average value of these functions be other than zero - they are always orthogonal.
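The same kind of numerical check covers this conclusion from eqn. (2.28A) - the sine-cosine product averages to zero for every integer N, including N = 1 (a Python sketch with the same 16-point sampling):

```python
import math

def avg_sin_cos(N, samples=16):
    """Average of Sin(x)*Cos(N*x) over one sampled cycle."""
    total = sum(math.sin(2 * math.pi * i / samples) *
                math.cos(2 * math.pi * N * i / samples)
                for i in range(samples))
    return total / samples

# Zero for every integer N -- even N = 1, where the frequencies coincide.
for N in range(8):
    assert abs(avg_sin_cos(N)) < 1e-12
```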

[Figure 2.8 plots Sin(x), Cos(x), and their point-by-point product Cos(x)Sin(x) over one cycle; the product averages to zero even though the two frequencies are identical.]

Figure 2.8 - Cos(x)Sin(x)


2.2.6 THE DFT/FOURIER MECHANISM

Finally, we must consider all of this together. We know that the composite waveform is generated by summing in harmonic components:

    F(t) = A0 + A1Cos(t) + B1Sin(t) + A2Cos(2t) + B2Sin(2t) + ...
               + AnCos(nt) + BnSin(nt)              (2.29)

If we multiply this composite function by Sin(Kt) (or, alternatively, Cos(Kt)), where K is an integer, we will create the following terms on the right hand side of the equation:

    A0Sin(Kt) + A1Cos(t)Sin(Kt) + B1Sin(t)Sin(Kt) + ...
        + AkCos(Kt)Sin(Kt) + BkSin²(Kt) + ...
        + AnCos(nt)Sin(Kt) + BnSin(nt)Sin(Kt)       (2.30)

Treating each of these terms as individual functions, if the argument (Kt) equals the argument of the sinusoid it multiplies, that component will be "rectified." Otherwise, the component will not be rectified. From what we have shown above, two sinusoids with a π/2 phase relationship (i.e. Sine/Cosine), or with integer multiple frequencies, represent orthogonal functions. As such, when summed over all values within the domain of definition, they will all yield a zero resultant (regardless of whether they are handled as individual terms or combined into a composite waveform). That, of course, is precisely what we demanded of a procedure to isolate the harmonic components of an arbitrary waveform. The examples of the next chapter will illustrate the practical reality of these relationships. Since a computer can do little more than simple arithmetic on the input data, computer examples have a way of removing any reasonable question about the validity of theoretical developments.
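The whole mechanism can be demonstrated in a few lines. In the Python sketch below (the amplitude lists are arbitrary, hypothetical values), multiplying the composite by Sin(Kt) and averaging isolates Bk/2, exactly as eqn. (2.30) predicts:

```python
import math

SAMPLES = 16
# Arbitrary (hypothetical) component amplitudes for eqn. (2.29):
A = [0.0, 0.3, 0.0, 1.1, 0.0, 0.5, 0.0, 0.0]   # cosine amplitudes A0..A7
B = [0.0, 0.7, 0.2, 0.0, 0.9, 0.0, 0.0, 0.0]   # sine amplitudes   B0..B7

def composite(i):
    """The summed waveform of eqn. (2.29), sampled at point i."""
    x = 2 * math.pi * i / SAMPLES
    return sum(A[k] * math.cos(k * x) + B[k] * math.sin(k * x)
               for k in range(len(A)))

# Multiply by Sin(K*t) and average: every orthogonal term drops out and
# only Bk/2 survives.
K = 4
avg = sum(composite(i) * math.sin(2 * math.pi * K * i / SAMPLES)
          for i in range(SAMPLES)) / SAMPLES
assert abs(avg - B[K] / 2) < 1e-12              # recovers 0.9/2 = 0.45
```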

CHAPTER III

THE DIGITAL FOURIER TRANSFORM ALGORITHM

3.0 INTRODUCTION

The DFT is a simple algorithm. It consists of stepping through the digitized data points of the input function, multiplying each point by sine and cosine functions as you go along, and summing the resulting products into accumulators (one for the sine component and another for the cosine). When we have processed every data point in this manner, we divide the accumulators (i.e. the sum-totals of the preceding process) by the number of data points. The resulting quantities are the average values for the sine and cosine components at the frequency being investigated, as we described in the preceding chapter. We must repeat this process for all integer multiple frequencies up to the frequency that is equal to the sampling rate minus 1 (i.e. twice the Nyquist frequency minus 1), and the job is done.

In this chapter we will examine a program that performs the DFT. We will walk through this first program step by step, describing each operation explicitly.


3.1 THE DFT COMPUTER PROGRAM

In the program presented below a "time domain" function is generated (16 data points) by summing together the first 8 harmonic components of the classic "triangle wave." This time domain data is stored in an array Y(n), and then analyzed as described above. In this program we use programming and data structuring features common to all higher level languages, viz. the data is stored in arrays and the execution of the program takes place via subroutines. Each subroutine works on the data arrays, performing a specific task. This allows the main body of the program (i.e. lines 20 through 80) to operate at a high level, executing the necessary tasks (i.e. the subroutines) in a logical order. Let's begin by looking at the whole program. As you can see, everything is controlled between lines 20 and 60.

6 REM ******************************************
8 REM *** (DFT3.1) GENERATE/ANALYZE WAVEFORM ***
10 REM ******************************************
12 PI=3.141592653589793#:P2=2*PI:K1=PI/8:K2=1/PI
14 DIM Y(16),FC(16),FS(16),KC(16),KS(16)
16 CLS:FOR J=0 TO 16:FC(J)=0:FS(J)=0:NEXT
20 GOSUB 108: REM - PRINT COLUMN HEADINGS
30 GOSUB 120: REM - GENERATE FUNCTION Y(X)
40 GOSUB 200: REM - PERFORM DFT
60 GOSUB 140: REM - PRINT OUT FINAL VALUES
70 PRINT:PRINT "MORE (Y/N)? ";
72 A$ = INKEY$: IF A$="" THEN 72
74 PRINT A$: IF A$ = "Y" THEN 16
80 END


100 REM ******************************************
102 REM *          PROGRAM SUBROUTINES           *
104 REM ******************************************
106 REM *        PRINT COLUMN HEADINGS           *
107 REM ******************************************
108 PRINT:PRINT
110 PRINT "FREQ    F(COS)    F(SIN)    Y(COS)    Y(SIN)"
112 PRINT
114 RETURN
118 REM ******************************
120 REM *** GENERATE FUNCTION Y(X) ***
121 REM ******************************
122 FOR I = 0 TO 15:K3=I*K1
124 Y(I) = COS(K3)+COS(3*K3)/9+COS(5*K3)/25+COS(7*K3)/49
126 NEXT
128 FOR I=1 TO 7 STEP 2: KC(I)=1/I^2:NEXT
130 RETURN
132 REM ******************************
138 REM *        PRINT OUTPUT        *
139 REM ******************************
140 FOR Z=0 TO 15
142 PRINT Z;"    ";
144 PRINT USING "##.#####   ";FC(Z),FS(Z),KC(Z),KS(Z)
146 NEXT Z
148 RETURN
200 REM **************************
202 REM *  SOLVE FOR COMPONENTS  *
204 REM **************************
206 FOR J=0 TO 15: REM SOLVE EQNS FOR EACH FREQUENCY
208 FOR I = 0 TO 15: REM MULTIPLY AND SUM EACH DATA POINT
210 FC(J)=FC(J)+Y(I)*COS(J*I*K1):FS(J)=FS(J)+Y(I)*SIN(J*I*K1)
212 NEXT I
214 FC(J)=FC(J)/16: FS(J)=FS(J)/16:REM FIND MEAN VALUE
216 NEXT J
218 RETURN

Figure 3.1


Now let's dissect this program and its routines to see how things really get done. At the beginning of the program (line 12) we define the frequently used constants PI, 2*PI, PI/8, and 1/PI (we will duplicate each section of the program as we go along so that you don't have to flip pages). At line 14 we "DIMension" (i.e. define the size of) the arrays to be used in the program. Array Y(16) will store the 16 data points of the time domain function to be analyzed, while FC(16) and FS(16) will hold the 16 derived amplitudes of the Fourier cosine and sine components. Similarly,

6 REM ******************************************
8 REM *** (DFT3.1) GENERATE/ANALYZE WAVEFORM ***
10 REM ******************************************
12 PI=3.141592653589793#:P2=2*PI:K1=PI/8:K2=1/PI
14 DIM Y(16),FC(16),FS(16),KC(16),KS(16)
16 CLS:FOR J=0 TO 16:FC(J)=0:FS(J)=0:NEXT

KC(16) and KS(16) will hold the amplitudes of the sinusoids used to generate the input function (these are saved for comparison to the derived components). Having completed this preliminary work, line 16 clears the screen with a CLS statement, and then initializes the arrays FC(J) and FS(J) by placing a zero in every location. Note that the array proper is the FC() designation and that J only provides a convenient variable to specify the location within the array. We may use any variable (or constant) at any time to specify locations within arrays. The data stored at those locations will be unaffected.

This brings us to the main program (lines 20 through 60), which accomplishes the high level objectives. When the program


20 GOSUB 108: REM - PRINT COLUMN HEADINGS
30 GOSUB 120: REM - GENERATE FUNCTION Y(X)
40 GOSUB 200: REM - PERFORM DFT
60 GOSUB 140: REM - PRINT OUT FINAL VALUES
70 PRINT:PRINT "MORE (Y/N)? ";
72 A$ = INKEY$: IF A$="" THEN 72
74 PRINT A$: IF A$ = "Y" THEN 16
80 END

comes to the GOSUB instruction at line 20 it will "jump down" to line 108, and so will we. This subroutine prints the column headings. In addition to printing out the amplitudes of the sine and

106 REM *   PRINT COLUMN HEADINGS   *
108 PRINT:PRINT
110 PRINT "FREQ    F(COS)    F(SIN)    Y(COS)    Y(SIN)"
112 PRINT
114 RETURN

cosine components (as do most Fourier analysis programs), in this program we also print out the amplitudes of the components which were used to generate the input function [i.e. Y(COS) and Y(SIN)]. This allows a direct comparison of output to input and tells us how well the analysis scheme is working. Lines 108 through 112 print this heading and then, at line 114, we encounter a RETURN statement which sends program control back to the instruction following the line 20 GOSUB 108 statement (i.e. program execution jumps back to line 30).

Line 30 jumps us down to the subroutine located at line 120, which generates the input function. Line 120 is a REMark statement telling us this is where we generate the time domain input function Y(X), which we will do by summing the harmonic components known to construct a "triangle wave." At line 122 we set up a loop that steps "I" from 0 to 15 (the variable I will count


120 REM *** GENERATE FUNCTION Y(X) ***
122 FOR I = 0 TO 15:K3=I*K1
124 Y(I) = COS(K3)+COS(3*K3)/9+COS(5*K3)/25+COS(7*K3)/49
126 NEXT
128 FOR I=1 TO 7 STEP 2: KC(I)=1/I^2:NEXT
130 RETURN

the 16 data points of our triangle wave function - note that K3 is computed each time through the loop (K1 is defined back on line 12 as PI/8). Line 124 is the business end of this routine; it sums the odd cosine components (with amplitudes inversely proportional to the square of their frequencies) into each point of array Y(I). Since there are 16 points in the data array we can have a maximum of 8 harmonic components (there must be a minimum of two data points for each "cycle" of the Nyquist frequency).¹ At line 126 the NEXT statement sends us back through the loop again, until we have stepped through the 2*PI radians of a full cycle of the fundamental. At line 128 we have inserted a loop which puts 1/N² into the odd cosine terms of the KC(I) array (these are, in fact, the amplitudes of the sinusoids we used to generate this function). Having done all this, we have completed the generation of our input function, and now RETURN (line 130) to the main program (i.e. to line 40).

40 GOSUB 200: REM - PERFORM DFT

We are now ready to perform a Fourier Transform of the time domain function in array Y(X). From line 40 we GOSUB to

1. Note that only 8 harmonics are used to generate this function (in fact that is all the Nyquist Sampling Theorem will allow), but there are 16 frequencies derived in the DFT. We will discuss this in detail later.


line 206 where we set up a loop. This loop will handle everything that must be done at each of the harmonic frequencies (in this case the frequency is designated by J). We must perform a multiplication by cosine and sine at each point of the data array (for the frequency being worked on) and sum the results into the locations FC(J) and FS(J). Line 208 sets up a nested loop which will

200 REM **************************
202 REM *  SOLVE FOR COMPONENTS  *
204 REM **************************
206 FOR J=0 TO 15: REM SOLVE EQNS FOR EACH FREQUENCY
208 FOR I = 0 TO 15: REM MULTIPLY AND SUM EACH DATA POINT
210 FC(J)=FC(J)+Y(I)*COS(J*I*K1):FS(J)=FS(J)+Y(I)*SIN(J*I*K1)
212 NEXT I
214 FC(J)=FC(J)/16: FS(J)=FS(J)/16:REM FIND MEAN VALUE
216 NEXT J
218 RETURN

step I from 0 to 15. Note that, just as J indicates the frequency, I indicates the data point in the input function array. Line 210 sums into FC(J) the product of the data point at Y(I) multiplied by COS(K1*I*J). We are multiplying the Ith data point by the cosine of: K1 (i.e. PI/8) multiplied by I (which yields the number of radians along the fundamental at which this data point lies) and then multiplied by the frequency of the component being extracted (i.e. J), which yields the correct number of radians for that particular harmonic. In this same line the "sine term" is also found and summed into FS(J). At line 212 we encounter the NEXT I statement, jump back to line 208 and repeat this operation for the next data point. When we have stepped through the 16 points of the data array, we move down to line 214 and divide both of these summations by 16 to obtain the average value. At line 216 we jump back to line 206 and perform the whole routine over for the


next harmonic. We continue this process until we have analyzed all 16 frequencies (the constant, or "D.C." component, is occasionally referred to as the "zeroth" frequency).
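The walkthrough above translates almost line for line into a modern language. The sketch below (Python standing in for the book's BASIC) mirrors the subroutine at lines 206-218 and checks it against the triangle-wave input generated by lines 122-126:

```python
import math

def dft_16(y):
    """Mirror of the BASIC subroutine at lines 206-218: for each
    frequency j, multiply every sample by cos/sin, sum, then average."""
    n = len(y)
    k1 = 2 * math.pi / n            # equals K1 = PI/8 when n = 16
    fc = [0.0] * n
    fs = [0.0] * n
    for j in range(n):              # FOR J=0 TO 15
        for i in range(n):          # FOR I=0 TO 15
            fc[j] += y[i] * math.cos(j * i * k1)
            fs[j] += y[i] * math.sin(j * i * k1)
        fc[j] /= n                  # FIND MEAN VALUE
        fs[j] /= n
    return fc, fs

# Triangle-wave test input, as generated by lines 122-126:
y = [sum(math.cos(k * i * math.pi / 8) / k**2 for k in (1, 3, 5, 7))
     for i in range(16)]
fc, fs = dft_16(y)
assert abs(fc[1] - 0.5) < 1e-9       # half the input amplitude of 1.0
assert abs(fc[3] - 1 / 18) < 1e-9    # half of 1/9
```

The derived values come out at exactly half the generating amplitudes, which is what the printout of the next section shows.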

60 GOSUB 140: REM - PRINT OUT FINAL VALUES

Having completed our Fourier analysis, we then return to line 60 where we jump down to the "PRINT OUTPUT" subroutine located at line 140. We set up a loop counter Z which counts from 0 to 15 (corresponding to the frequencies analyzed) and, in fact, at line 142, we print Z under the column heading "FREQ". Let's make note of a few things that happen here:

138 REM *        PRINT OUTPUT        *
140 FOR Z=0 TO 15
142 PRINT Z;"    ";
144 PRINT USING "##.#####   ";FC(Z),FS(Z),KC(Z),KS(Z)
146 NEXT Z
148 RETURN

1) A semicolon separates the PRINT Z and the "    ". This causes them both to be printed on the same line.

2) The "    "; simply causes a space to be printed between the frequency column and the following data (note that another semicolon is used so that the next PRINT statement will still be printed on the same line).

Line 144 then prints out the relevant data with a PRINT USING statement. Line 146 causes the program to go back and print out the next line of data with a NEXT Z.

When the data for all 16 frequencies has been printed we return to the main program (line 70) and ask if "MORE (Y/N)?" is desired. Line 72 looks for an input from the keyboard and assigns


70 PRINT:PRINT "MORE (Y/N)? ";
72 A$ = INKEY$: IF A$="" THEN 72
74 PRINT A$: IF A$ = "Y" THEN 16
80 END

the input to the variable A$. If no key is pressed, A$ will have nothing in it (i.e. A$ will equal "") and the instruction will be repeated. If A$ has any data in it at all, program execution passes down to line 74 where the data is printed and we check to see if A$ = "Y". If A$ equals "Y" then execution jumps back to line 16 and we begin again; otherwise, execution passes on to line 80 which ends the program. For now this routine only provides a controlled ending of the program, but it will be used more meaningfully later.

3.2 PROGRAM EXECUTION AND PHENOMENA

In the exercises that follow we will test what we have done. The value of this section is subtle, but profound; all too often the student fails to grasp the practical significance and limitations of the subject he studies. Do you know, for example, if the results of the DFT will be exact or only approximate? Perhaps theoretically exact but masked by "noise" sources (e.g. truncation errors)? The actual results may surprise you. The following exercises have been selected to be instructive in the practical usage of the DFT. Our purpose is to gain experience with the tool we use, as well as confidence in the software we write. Our purpose is to understand the DFT.


3.2.1 PROGRAM EXECUTION

If we run the program created above we will obtain the results shown in Fig. 3.2 below. You will note that only cosine components were generated for the input function and, fortunately, only cosine components appear in the analysis; however, all of the results obtained by the analysis are one half the amplitudes of the input waveform, within the accuracy of the data printout. You will

FREQ    F(COS)    F(SIN)    Y(COS)    Y(SIN)

 0      0.0000    0.0000    0.0000    0.0000
 1      0.5000    0.0000    1.0000    0.0000
 2      0.0000    0.0000    0.0000    0.0000
 3      0.0556    0.0000    0.1111    0.0000
 4      0.0000    0.0000    0.0000    0.0000
 5      0.0200    0.0000    0.0400    0.0000
 6      0.0000    0.0000    0.0000    0.0000
 7      0.0102    0.0000    0.0204    0.0000
 8      0.0000    0.0000    0.0000    0.0000
 9      0.0102    0.0000    0.0000    0.0000
10      0.0000    0.0000    0.0000    0.0000
11      0.0200    0.0000    0.0000    0.0000
12      0.0000    0.0000    0.0000    0.0000
13      0.0556    0.0000    0.0000    0.0000
14      0.0000    0.0000    0.0000    0.0000
15      0.5000    0.0000    0.0000    0.0000

Figure 3.2 - Fourier Transform Output

also note that, while only the first seven harmonics were created for the input function, components show up for all 15 frequencies


of the analysis. Note that the components from 9 through 15 are a "mirror image" of the components from 1 through 7 (i.e. the two halves of the spectrum are symmetrical about the Nyquist frequency). The frequencies above the Nyquist are negative frequencies, and consequently are complex conjugates of the frequencies below the Nyquist (as shall be seen shortly).
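The mirror-image claim is easy to confirm numerically. A Python sketch (with a hypothetical two-component test signal) shows that, for a real input, the upper half of the spectrum repeats the lower half with the sine parts negated:

```python
import math

N = 16
# A hypothetical real-valued test signal with components at bins 3 and 5.
y = [math.cos(2 * math.pi * 3 * i / N) + 0.5 * math.sin(2 * math.pi * 5 * i / N)
     for i in range(N)]

# Averaged cosine/sine components, exactly as the DFT program computes them.
fc = [sum(y[i] * math.cos(2 * math.pi * j * i / N) for i in range(N)) / N
      for j in range(N)]
fs = [sum(y[i] * math.sin(2 * math.pi * j * i / N) for i in range(N)) / N
      for j in range(N)]

# Components above the Nyquist (bin 8) mirror those below it:
for j in range(1, 8):
    assert abs(fc[j] - fc[N - j]) < 1e-12   # cosine parts equal
    assert abs(fs[j] + fs[N - j]) < 1e-12   # sine parts negated (conjugates)
```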

3.3 DATA EXERCISES

The "triangle wave" used above is a relatively simple function, but it confirms that our DFT is working. We will give our DFT program a more complicated example shortly, but before we do that, let's consider another simple test. We will analyze a single sinusoid which has been shifted in phase by 67.5° from the reference cosine wave. To do this we change the GENERATE FUNCTION Y(X) subroutine as follows:

122 K4=3*PI/8:KC(1)=COS(K4):KS(1)=SIN(K4):REM SET K4=67.5°
124 FOR I = 0 TO 15:K3=I*K1
126 Y(I) = COS(K3+K4)
128 NEXT I

At line 122 we define K4 (i.e. we set K4 to 67.5° in radians), and place the cosine and sine of this angle into KC(1) and KS(1), which is the data we will use for comparison to the output. Lines 124 through 128 then generate a full cycle of a cosine wave shifted by 67.5°. When we perform this analysis we find that the DFT yields only sine and cosine components at the fundamental


and its negative. This example simply illustrates that the program can extract the sine and cosine components of a waveform that has been generated as a single sinusoid.

In most of the practical applications of the DFT we will deal with considerably more complicated functions than those presented above. A more difficult test for our program would be to create a composite time domain wave composed of completely random harmonic components - if the program can analyze this wave successfully it can handle anything. To generate this test we take advantage of the computer's ability to generate pseudo-random numbers and create a random pattern of sine and cosine amplitudes. We save these amplitudes in the arrays KC(I) and KS(I), and then use them to generate the time based function Y(X). This is accomplished by changing the GENERATE FUNCTION subroutine as follows:

122 FOR I=0 TO 8:KC(I)=RND(1):KS(I)=RND(1):NEXT
124 FOR I=0 TO 15:FOR J=0 TO 8:K4=I*J*K1
126 Y(I)=Y(I)+KC(J)*COS(K4)+KS(J)*SIN(K4)
128 NEXT J:NEXT I
130 RETURN

Line 122 generates the random amplitudes of the components using the RND(1) instruction. Lines 124 through 128 then create the points of Y(I) by summing in the contributions of the sinusoids which have those random amplitudes. For each data point in the time domain function (indicated by I) we step through the contributions of the 0th through 8th harmonic components (indicated by J).


Now when we run the program we obtain the following results:

FREQ    F(COS)     F(SIN)     Y(COS)     Y(SIN)

 0      0.12135    0.00000    0.12135    0.65186
 1      0.43443    0.36488    0.86886    0.72976
 2      0.39943    0.03685    0.79885    0.07369
 3      0.24516    0.22726    0.49031    0.45451
 4      0.05362    0.47526    0.10724    0.95051
 5      0.35193    0.26593    0.70387    0.53186
 6      0.48558    0.16407    0.97116    0.32093
 7      0.47806    0.46726    0.95612    0.93451
 8      0.53494    0.00000    0.53493    0.56442
 9      0.47806   -0.46726    0.00000    0.00000
10      0.48558   -0.16407    0.00000    0.00000
11      0.35193   -0.26593    0.00000    0.00000
12      0.05362   -0.47526    0.00000    0.00000
13      0.24516   -0.22726    0.00000    0.00000
14      0.39943   -0.03685    0.00000    0.00000
15      0.43443   -0.36488    0.00000    0.00000

Figure 3.3 - Random Amplitude Function

We notice several things about this analysis immediately:

1. The sine components for the 0th term and 8th term are both zero, even though they were not zero in the input function. This is because no sine term can exist for either of these components! SIN(N*0) = 0 and SIN(N*PI) = 0. Since we have multiplied through the zeroth term by the frequency of zero, all of the sine terms will be zero in the analysis - they will also be zero in the input function for the same reason. Likewise, there can be no sine term for the Nyquist frequency. Even though we assigned values to these components in our generation of the wave, they were never created in the time domain function simply because such components cannot be created.
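Observation 1 can be verified directly: the sine multiplier itself vanishes at every sample point for both the zeroth and the Nyquist frequency, so no sine term can survive there. A quick Python check:

```python
import math

N = 16
# At every one of the 16 sample points the sine factor is zero for
# frequency 0 (SIN(0) = 0) and for the Nyquist frequency 8 (SIN(i*pi) = 0).
for i in range(N):
    assert math.sin(2 * math.pi * 0 * i / N) == 0.0
    assert abs(math.sin(2 * math.pi * 8 * i / N)) < 1e-12
```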

2. The cosine amplitudes of the zeroth and 8th frequency components are not half of the input function amplitudes. Now,


we showed in the last chapter that the derived amplitudes would all be half of the actual component amplitudes, so what is going on here? The 0th term represents the D.C. component (average value) as explained in chapter 1, and all of the terms are simply multiplied by cos(0) = 1. This is reasonably apparent and, in fact, was what we should have expected; but, it may not have been apparent that the argument for the cosine term at the Nyquist frequency would always be 0 or N*PI, always yielding a cosine value of ±1. Any cosine component in the input will be rectified, yielding an average value in the analysis equal to the peak value of that component.

3. Note that the sine components for all of the frequencies above the Nyquist are negative. This negation of the sine component comes about because the frequencies above the Nyquist are mathematically negative frequencies (as we noted earlier), and a negative frequency produces the complex conjugate of its positive frequency counterpart (i.e. the sine component of a complex frequency is negated but the cosine component remains unchanged).

If you are already familiar with Fourier Analysis the above observations should come as no surprise; still, it is interesting to see that practical results agree with the theory.

Let's change the GENERATE FUNCTION subroutine to illustrate one last important point: we will use a linear equation to generate a "perfect" triangle wave. We already know that the terms attenuate as 1/N² in a triangle wave and that only the odd-numbered terms are present - we have just analyzed a seven-component approximation of this function. It would seem reasonable that we obtain similar results with a "straight line" version of this function. In the routine shown below we use a "scale factor" of PI²/8 to provide the same amplitudes as the components used in our synthesized version (i.e. the fundamental has an amplitude of 1.0 and the harmonics all "roll off" as 1/N²). The final GENERATE FUNCTION subroutine will be:


122 K2=(PI*PI)/8:K3=K2/4
124 FOR I=0 TO 7:Y(I)=K2-K3*I:NEXT I
126 FOR I=8 TO 15:Y(I)=K3*I-3*K2:NEXT I
128 RETURN

We run the program and obtain the results shown in Fig. 3.4. First of all we notice that there are no harmonic amplitudes given for the input function; there shouldn't be any, of course, because we didn't generate the function that way. Of considerably

FREQ    F(COS)     F(SIN)    Y(COS)    Y(SIN)

 0      0.00000    0.0000    0.0000    0.00000
 1      0.50648    0.0000    0.0000    0.00000
 2      0.00000    0.0000    0.0000    0.00000
 3      0.06245    0.0000    0.0000    0.00000
 4      0.00000    0.0000    0.0000    0.00000
 5      0.02788    0.0000    0.0000    0.00000
 6      0.00000    0.0000    0.0000    0.00000
 7      0.02004    0.0000    0.0000    0.00000
 8      0.00000    0.0000    0.0000    0.00000
 9      0.02004    0.0000    0.0000    0.00000
10      0.00000    0.0000    0.0000    0.00000
11      0.02788    0.0000    0.0000    0.00000
12      0.00000    0.0000    0.0000    0.00000
13      0.06245    0.0000    0.0000    0.00000
14      0.00000    0.0000    0.0000    0.00000
15      0.50648    0.0000    0.0000    0.00000

Figure 3.4 - Analysis of a "Perfect" Triangle Wave

more importance is the fact that the components don't match the amplitudes that we said they would (compare them to the values derived back in Figure 3.2). Not only do they not have the correct values, they don't even have the correct 1/N² ratios! This is completely wrong! Is something wrong with our program?

Even though it is customary to throw up the hands when results such as these are obtained (not a completely infrequent occurrence), and proclaim the DFT useless for "real work," it will actually be best if we can remain calm for a few minutes and


examine what has happened here. First of all, there is nothing wrong with the computer program. The program is telling us exactly what it should be telling us. There is nothing wrong with the equations we used to generate the function and there is nothing wrong with the DFT we are using. The thing that is "wrong" is that we have just experienced the effects of aliasing! We generated the above triangle wave deliberately so that it would include the higher order harmonic components (even though we know the maximum Nyquist frequency is "8" for a data base of only 16 data points). All of the harmonics above that frequency have been "folded back" into the spectrum we are analyzing and have given us "incorrect" values. If we want to generate a function as we did in this example, and have it agree with the known harmonic analysis of the "classic" waveshape, we must filter off the harmonics above the Nyquist before we attempt to digitize it.
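The folding can be seen with a single sinusoid. With 16 samples per record, a sine at frequency 13 produces exactly the same sample values as a sine at frequency -3, so its energy lands in bin 3 (a minimal Python sketch):

```python
import math

N = 16
# Frequency 13 = 16 - 3 is indistinguishable from frequency -3 once
# sampled: sin(2*pi*13*i/16) = -sin(2*pi*3*i/16) at every sample point.
for i in range(N):
    high = math.sin(2 * math.pi * 13 * i / N)
    folded = -math.sin(2 * math.pi * 3 * i / N)
    assert abs(high - folded) < 1e-12
```

This is exactly what happened to the "perfect" triangle wave: every harmonic above bin 8 folded back onto a bin below it and corrupted the values there.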

The mechanics of the aliasing phenomenon are very interesting to delve into, but our concern here is with the FFT, and so we will resist the urge to dig deeper. We have gone through this exercise because it is the sort of thing that happens in the practical application of DFT/FFT routines. Systems are improperly designed (or improperly applied) and then, later, no one can understand why the results are invalid. The DFT algorithm is indeed a simple program, but there are a great many "traps" lurking for the unwary. While it is relatively easy to explain how the FFT works and how to write FFT programs, there is no alternative to studying the DFT, and FFT, and all of the associated engineering disciplines, in detail. Like Geometry (and, for that matter, most other things of value) - for this subject "There is no royal road..."

CHAPTER IV

THE INVERSE TRANSFORM AND COMPLEX VARIABLES

4.1 RECONSTRUCTION

The inverse transform is, intuitively, a very simple operation. We know what the amplitudes of the sinusoids are (from the forward transform), so we simply reconstruct all of these sinusoids and sum them together. Nothing could be simpler.

We note that the process of extracting the individual frequency components yielded only half amplitude values for all but the constant term and the Nyquist frequency term; however, we extracted components for both the negative and positive frequencies (i.e. both above and below the Nyquist). This all works out neatly in the reconstruction process since it will provide precisely the correct amplitudes when both negative and positive frequency terms are summed in. Before we develop this discussion further let's write an inverse transform routine and incorporate it into the DFT program of the preceding chapters.


6 REM *******************************************
8 REM ** (DFT4.1) ANALYZE/RECONSTRUCT WAVEFORM **
10 REM *******************************************
11 REM *** DEFINE CONSTANTS
12 PI=3.141592653589793#:P2=2*PI:K1=PI/8:K2=1/PI
13 REM *** DIMENSION ARRAYS
14 DIM Y(16),FC(16),FS(16),KC(16),KS(16),Z(16)
15 REM *** INITIALIZE FOURIER COEFFICIENT ARRAYS
16 CLS:FOR J=0 TO 16:FC(J)=0:FS(J)=0:NEXT
20 GOSUB 108: REM * PRINT COLUMN HEADINGS
30 GOSUB 120: REM * GENERATE FUNCTION Y(X)
40 GOSUB 200: REM * PERFORM DFT
60 GOSUB 140: REM * PRINT OUT FINAL VALUES
69 REM *** ASK IF RECONSTRUCTION IS NECESSARY
70 PRINT:PRINT "RECONSTRUCT (Y/N)? ";
72 A$ = INKEY$: IF A$="" THEN 72
74 PRINT A$: IF A$ = "Y" THEN 80
76 END
80 CLS:GOSUB 220:REM * RECONSTRUCT
82 GOSUB 240:REM * PRINT OUTPUT
84 PRINT:PRINT "MORE (Y/N)? ";
86 A$ = INKEY$: IF A$ = "" THEN 86
88 PRINT A$: IF A$ = "Y" THEN 15
90 GOTO 76
100 REM ******************************************
102 REM *          PROGRAM SUBROUTINES           *
104 REM ******************************************
106 REM *        PRINT COLUMN HEADINGS           *
108 PRINT:PRINT
109 REM *** Y(COS) AND Y(SIN)=INPUT COMPONENT AMPLITUDES
110 PRINT "FREQ    F(COS)    F(SIN)    Y(COS)    Y(SIN)"
112 PRINT
114 RETURN
118 REM ******************************
120 REM *** GENERATE FUNCTION F(X) ***
122 FOR I = 0 TO 15:K3=I*K1:REM I=DATA POINT LOCATION IN ARRAY
123 REM *** SET Y(I)=FIRST 8 COMPONENTS OF TRIANGLE WAVE
124 Y(I) = COS(K3)+COS(3*K3)/9+COS(5*K3)/25+COS(7*K3)/49
126 NEXT
127 REM *** STORE COMPONENT AMPLITUDES
128 FOR I=1 TO 7 STEP 2: KC(I)=1/I^2:NEXT
130 RETURN
132 REM ******************************
138 REM *        PRINT OUTPUT        *
140 FOR Z=0 TO 15
142 PRINT Z;"    ";:REM * Z=COMPONENT FREQUENCY
144 PRINT USING "##.#####   ";FC(Z),FS(Z),KC(Z),KS(Z)
146 NEXT Z
148 RETURN
200 REM **************************
202 REM *  SOLVE FOR COMPONENTS  *
206 FOR J=0 TO 15:REM * SOLVE EQNS FOR EACH FREQUENCY
208 FOR I = 0 TO 15:REM * MULTIPLY AND SUM EACH DATA POINT
210 FC(J)=FC(J)+Y(I)*COS(J*I*K1):FS(J)=FS(J)+Y(I)*SIN(J*I*K1)
212 NEXT I
214 FC(J)=FC(J)/16: FS(J)=FS(J)/16:REM * FIND MEAN VALUE
216 NEXT J
218 RETURN
220 REM **************************
222 REM *       RECONSTRUCT      *
224 REM **************************
226 FOR J=0 TO 15:REM * RECONSTRUCT EACH FREQUENCY
228 FOR I = 0 TO 15: REM * RECONSTRUCT EACH DATA POINT
230 Z(I)=Z(I)+FC(J)*COS(J*I*K1)+FS(J)*SIN(J*I*K1)
232 NEXT I
234 NEXT J
236 RETURN
240 REM ******************************
241 REM *        PRINT OUTPUT        *
242 REM ******************************
243 REM * Y(I) EQUALS INPUT FUNCTION FOR COMPARISON
244 CLS:PRINT:PRINT "T       Z(I)      Y(I)":PRINT:PRINT
245 FOR Z=0 TO 15
246 PRINT Z;"    ";
248 PRINT USING "##.#####   ";Z(Z),Y(Z)
250 NEXT Z
252 RETURN

Figure 4.1



The first part of this program is apparently unchanged from the program of the preceding chapter, except that line 14 defines a new array Z(16). This array will hold the reconstructed input function. At line 70 we change the question asked to the following: "RECONSTRUCT (Y/N)?". If the answer is "Y" then we pass on to line 80, where we begin the operation of reconstruction.

As in the preceding program, we use subroutines to simplify the operation. At line 80 we jump down to line 220 where the operation of reconstruction is performed. At line 82 we print out the results. Line 84 asks if we want "MORE?". A "Y" returns us to line 15 - anything else ends the program. Let's now look at the inverse transform routine:

226 FOR J=0 TO 15:REM * RECONSTRUCT EACH FREQUENCY
228 FOR I = 0 TO 15: REM * RECONSTRUCT EACH DATA POINT
230 Z(I)=Z(I)+FC(J)*COS(J*I*K1)+FS(J)*SIN(J*I*K1)
232 NEXT I
234 NEXT J
236 RETURN

At line 226 we set up a loop to count from 0 to 15 (i.e. count the frequency components used in the reconstruction). At line 228 we set up a nested loop to count through the 16 data points of the reconstruction. At line 230 we sum into the array Z(I) the contribution of the Jth frequency component at data point I (both cosine and sine components). As pointed out above, we sum in all of the frequency components, i.e. the contributions from both the positive and negative frequency components, the constant term, and the Nyquist frequency term. It's that simple.
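The whole round trip - forward transform at half amplitude, then summing positive and negative frequency components back together - can be sketched compactly (Python standing in for the BASIC routines):

```python
import math

N = 16
k1 = math.pi / 8                            # K1 in the programs
# The triangle-wave input of DFT4.1, lines 122-126:
y = [sum(math.cos(k * i * k1) / k**2 for k in (1, 3, 5, 7)) for i in range(N)]

# Forward transform with averaging (lines 206-218): half amplitudes.
fc = [sum(y[i] * math.cos(j * i * k1) for i in range(N)) / N for j in range(N)]
fs = [sum(y[i] * math.sin(j * i * k1) for i in range(N)) / N for j in range(N)]

# Reconstruction (lines 226-236): sum EVERY component back in.
z = [sum(fc[j] * math.cos(j * i * k1) + fs[j] * math.sin(j * i * k1)
         for j in range(N)) for i in range(N)]

for i in range(N):
    assert abs(z[i] - y[i]) < 1e-9          # reconstruction matches the input
```

The paired half-amplitude components above and below the Nyquist add back up to the full amplitudes, which is why the Z(I) column of the printout agrees with Y(I).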

The print routine for the reconstructed function Z(Z) (as well as the input time domain function Y(Z)) is located at line 240.


243 REM * Y(I) EQUALS INPUT FUNCTION FOR COMPARISON
244 CLS:PRINT:PRINT "T       Z(I)      Y(I)":PRINT:PRINT
245 FOR Z=0 TO 15
246 PRINT Z;"    ";
248 PRINT USING "##.#####   ";Z(Z),Y(Z)
250 NEXT Z

At line 244 we clear the screen and print the new heading. At lines 245 through 250 we print the reconstruction as well as the input data.

If we run this program we will obtain the following output:

T     Z(I)      Y(I)
0 1.17152 1.17152
1 0.93224 0.93224
2 0.61469 0.61469
3 0.30918 0.30918
4 -0.00000 -0.00000
5 -0.30918 -0.30918
6 -0.61469 -0.61469
7 -0.93224 -0.93224
8 -1.17152 -1.17152
9 -0.93224 -0.93224
10 -0.61469 -0.61469
11 -0.30918 -0.30918
12 -0.00000 -0.00000
13 0.30918 0.30918
14 0.61468 0.61469
15 0.93224 0.93224

Figure 4.2


4.2 TRANSFORM SYMMETRY AND COMPLEX VARIABLES

All of the above is beautifully simple; unfortunately, the purist will never let us leave things that way. While the above is certainly not incorrect, it is slightly "out of bed" with the formal definition of the DFT. Actually, the complications are not all that difficult; we need only reformulate everything in terms of complex variables. Let's look at the problem:

The definitions of the DFT and Inverse DFT are:

           N-1
    F(f) = 1/N Σ f(T) W_N^(-fT)                (4.1)
           T=0

           N-1
    f(T) =     Σ F(f) W_N^(fT)                 (4.2)
           f=0

Where:

    F(f) = frequency components or transform
    f(T) = time base data points or inverse xform
    N    = number of data points
    T    = discrete times
    f    = discrete frequencies
    W_N  = e^(i2π/N) = Cos(2π/N) + i Sin(2π/N)


There is marked symmetry between eqns. (4.1) and (4.2), but the algorithms given in DFT4.1 for the transform and inverse transform fail to reflect that symmetry. The inverse transform starts from complex quantities in the frequency domain while we use only real numbers for the input function in the forward transform. Now, in the general case, both the frequency domain and the time domain may be complex numbers, of course, and when we provide for this potential, the symmetry between the forward and inverse transforms immediately becomes apparent. Let's look at what happens to our transform algorithm when we consider complex variables:

Multiplication of two complex quantities yields the following terms:

    (A+iB)(C+iD) = AC + iAD + iBC - BD
                 = (AC-BD) + i(AD+BC)          (4.3)

We might note that (A+iB) is the input function, and (C+iD) is equal to W_N = e^(i2π/N) = Cos(2π/N) + i Sin(2π/N). Incorporating this into program DFT4.1, we convert the forward transform algorithm:

210 FC(J)=FC(J)+YR(I)*COS(J*I*K1)-YI(I)*SIN(J*I*K1)
211 FS(J)=FS(J)+YR(I)*SIN(J*I*K1)+YI(I)*COS(J*I*K1)

where YR stands for the real part of the input function and YI stands for the imaginary part (this obviously requires defining new arrays, YR(16) and YI(16), at program initialization). Similarly, recognizing that an imaginary term must be created in the inverse


transform (as defined in eqn. 4.2), the reconstruction algorithm becomes:

230 ZR(I)=ZR(I)+FC(J)*COS(J*I*K1)+FS(J)*SIN(J*I*K1)
231 ZI(I)=ZI(I)-FC(J)*SIN(J*I*K1)+FS(J)*COS(J*I*K1)

The symmetry is now much more apparent. Except for the sign changes, these lines of code are identical. With a little manipulation we can use the same routine for both forward and inverse transformation.
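That manipulation can be sketched in a few lines of Python (an illustration under the same conventions as the BASIC programs, not the book's code): one routine serves both directions, with a sign constant K6 flipping the sine terms and a scale constant K5 applying the 1/N average only on the forward pass, just as DFT4.2 will do below.

```python
import math

N = 16
K1 = math.pi / 8    # 2*pi/N, as in the BASIC programs

def transform(re, im, inverse=False):
    """Forward or inverse DFT with one loop, mirroring lines 210-214
    of DFT4.2: K6 = -1 forward / +1 inverse, K5 = N forward / 1 inverse."""
    K6 = 1 if inverse else -1
    K5 = 1 if inverse else N
    C, S = [0.0] * N, [0.0] * N
    for J in range(N):
        for I in range(N):
            c, s = math.cos(J * I * K1), math.sin(J * I * K1)
            C[J] += re[I] * c + K6 * im[I] * s
            S[J] += -K6 * re[I] * s + im[I] * c
        C[J] /= K5
        S[J] /= K5
    return C, S
```

A forward transform followed by an inverse transform reproduces the original complex input.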

Very well then, we will write a new DFT program with the above considerations incorporated. It will require completely revamping the data structures and even the basic flow of the program, but it will be formally correct. For us, at this point, it illustrates a significant characteristic of the DFT/Inverse DFT. While we are at it we should change the program to a menu driven format as it will be much better suited to our work in the following chapters. The new program (DFT4.2) is shown on the following pages. We must make note of the changes:

1. The data arrays have changed completely. At line 14 we now dimension four arrays: C(2,16), S(2,16), KC(2,16), and

14 DIM C(2,16),S(2,16),KC(2,16),KS(2,16)

KS(2,16). These are "two dimensional" arrays, if you will. There are two columns of 16 data points in each array. From now on we will put the time domain data in column 1 and the frequency domain data in column 2 of each of these arrays. KC(2,16) and


6 REM ******************************************
8 REM *** (DFT4.2) GENERATE/ANALYZE WAVEFORM ***
10 REM ******************************************
12 PI=3.141592653589793#:P2=2*PI:K1=PI/8:K2=1/PI
14 DIM C(2,16),S(2,16),KC(2,16),KS(2,16)
16 CLS:FOR J=0 TO 16:FOR I=1 TO 2:C(I,J)=0:S(I,J)=0:NEXT:NEXT
19 REM *******************
20 CLS:REM * MAIN MENU *
21 REM *******************
22 PRINT:PRINT:PRINT "     MAIN MENU":PRINT
24 PRINT " 1 = GENERATE FUNCTION":PRINT
26 PRINT " 2 = TRANSFORM FUNCTION":PRINT
28 PRINT " 3 = INVERSE TRANSFORM":PRINT
30 PRINT " 4 = EXIT":PRINT:PRINT
32 PRINT SPC(10);"MAKE SELECTION";
34 A$ = INKEY$: IF A$="" THEN 34
36 A=VAL(A$):ON A GOSUB 300,40,80,1000
38 GOTO 20
39 REM *****************************
40 REM * FORWARD TRANSFORM ROUTINE *
41 REM *****************************
42 CLS:N=1:M=2:K5=16:K6=-1:GOSUB 108
44 FOR J=0 TO 16:C(2,J)=0:S(2,J)=0:NEXT
45 GOSUB 200: REM - PERFORM DFT
46 GOSUB 140: REM - PRINT OUT FINAL VALUES
48 PRINT: INPUT "C/R TO CONTINUE";A$
50 RETURN
79 REM *************************
80 REM *   INVERSE TRANSFORM   *
81 REM *************************
82 CLS:FOR I=0 TO 15:C(1,I)=0:S(1,I)=0:NEXT
84 N=2:M=1:K5=1:K6=1:GOSUB 200:REM RECONSTRUCT INPUT
85 GOSUB 150:REM PRINT HEADING
86 GOSUB 140:REM PRINT OUTPUT
88 PRINT: INPUT "C/R TO CONTINUE";A$
90 RETURN
100 REM ******************************************
102 REM *          PROGRAM SUBROUTINES           *
104 REM ******************************************
106 REM * PRINT COLUMN HEADINGS *
108 PRINT:PRINT
110 PRINT "FREQ  F(COS)    F(SIN)    Y(COS)    Y(SIN)"
112 PRINT
114 RETURN
137 REM ******************************
138 REM *        PRINT OUTPUT        *
139 REM ******************************
140 FOR Z=0 TO 15
142 PRINT Z;" ";
144 PRINT USING "##.##### ";C(M,Z),S(M,Z),KC(M,Z),KS(M,Z)
146 NEXT Z
148 RETURN
150 REM ******************************
152 REM *   PRINT COLUMN HEADINGS    *
154 PRINT:PRINT
156 PRINT " T    RECONSTRUCTION    INPUT FUNCTION"
158 PRINT
160 RETURN
200 REM *******************************
202 REM *    TRANSFORM/RECONSTRUCT    *
204 REM *******************************
206 FOR J=0 TO 15:REM SOLVE EQNS FOR EACH FREQUENCY
208 FOR I=0 TO 15:REM MULTIPLY AND SUM EACH POINT
210 C(M,J)=C(M,J)+C(N,I)*COS(J*I*K1)+K6*S(N,I)*SIN(J*I*K1)
211 S(M,J)=S(M,J)-K6*C(N,I)*SIN(J*I*K1)+S(N,I)*COS(J*I*K1)
212 NEXT I
214 C(M,J)=C(M,J)/K5:S(M,J)=S(M,J)/K5:REM SCALE RESULTS
216 NEXT J
218 RETURN
299 REM ***********************
300 CLS:REM * FUNCTION MENU *
301 REM ***********************
302 FOR I=0 TO 15:C(1,I)=0:S(1,I)=0
303 FOR J=1 TO 2:KC(J,I)=0:KS(J,I)=0:NEXT:NEXT
304 PRINT:PRINT:PRINT "     FUNCTION MENU":PRINT
306 PRINT " 1 = TRIANGLE WAVE":PRINT
308 PRINT " 2 = CIRCLE":PRINT
310 PRINT " 3 = ELLIPSE 1":PRINT
312 PRINT " 4 = ELLIPSE 2":PRINT:PRINT
320 PRINT SPC(10);"MAKE SELECTION";
322 A$ = INKEY$: IF A$="" THEN 322
326 A=VAL(A$):ON A GOSUB 330,340,350,360,1000
328 RETURN
330 REM *** GENERATE FUNCTION F(X) ***
332 FOR I = 0 TO 15:K3=I*K1
334 C(1,I) = COS(K3)+COS(3*K3)/9+COS(5*K3)/25+COS(7*K3)/49
335 KC(1,I)=C(1,I)
336 NEXT
338 FOR I=1 TO 7 STEP 2:KC(2,I)=1/I^2:NEXT
339 RETURN
340 REM *** GENERATE CIRCLE ***
342 FOR I = 0 TO 15:K3=I*K1
344 C(1,I) = SIN(K3):S(1,I)=COS(K3)
345 KC(1,I)=C(1,I):KS(1,I)=S(1,I)
346 NEXT
348 KS(2,1)=1
349 RETURN
350 REM *** GENERATE ELLIPSE 1 ***
352 FOR I = 0 TO 15:K3=I*K1
354 C(1,I) = SIN(K3):S(1,I)=2*COS(K3)
355 KC(1,I)=C(1,I):KS(1,I)=S(1,I)
356 NEXT
358 KS(2,1)=1.5:KS(2,15)=.5
359 RETURN
360 REM *** GENERATE ELLIPSE 2 ***
362 FOR I = 0 TO 15:K3=I*K1
364 C(1,I) = COS(K3):S(1,I)=2*SIN(K3)
365 KC(1,I)=C(1,I):KS(1,I)=S(1,I)
366 NEXT
368 KC(2,1)=-.5:KC(2,15)=1.5
369 RETURN
1000 STOP

Figure 4.3


KS(2,16) are not needed for a "working" program, but we use them here to save the input functions which we have generated. We use these later for comparison with the transform and inverse transform. Again, the first column stores the time domain data and the second stores the frequency domain.

2. The program is menu driven. Lines 20 through 30 print the menu. Lines 32 and 34 determine what the selection is and

20 CLS:REM * MAIN MENU *
21 REM *******************
22 PRINT:PRINT:PRINT "     MAIN MENU":PRINT
24 PRINT " 1 = GENERATE FUNCTION":PRINT
26 PRINT " 2 = TRANSFORM FUNCTION":PRINT
28 PRINT " 3 = INVERSE TRANSFORM":PRINT
30 PRINT " 4 = EXIT":PRINT:PRINT
32 PRINT SPC(10);"MAKE SELECTION";
34 A$ = INKEY$: IF A$="" THEN 34
36 A=VAL(A$):ON A GOSUB 300,40,80,1000
38 GOTO 20

jump to the appropriate subroutine. The "generate function" routine is now located at line 300. The transform and inverse transform routines are now located at lines 40 and 80 respectively.

3. The "transform routine" is now located at line 40.

Since we now store the time and frequency data in the same array, and since the forward and inverse transforms are performed by the same subroutine, we must set up "pointers" so that the transform routine will know which way to operate on the data. We do this by using M and N as the pointers. N points to the "input function" and M points to the output function. If N=1 and M=2 (according to the statement above that 1 was time domain and 2 was frequency domain data) then we will perform a "Forward Transform." If

FFT/04

79

N=2 and M=1 we will perform the "Inverse Transform." In either case, we must have already created the input function (i.e. we must use the generate function option from the Main Menu) before we can perform a forward transform, and we must have performed a forward transform before we perform an inverse transform. From these conditions it is apparent what we must do to perform a forward transform: we clear the screen at line 42, set N=1, M=2, and then jump to line 108 to print the heading for the output. You will also note that we have set the constant K5=16. This constant is used to find the average value of each transformed component as we did in line 214 of DFT4.1 (Fig. 4.1). Since we do not need to make this division in the inverse transform, we set K5=1 for that operation (see inverse transform of Fig. 4.3 above).

At line 44 we clear the frequency domain arrays (i.e. C(2,16) and S(2,16)) before performing the transform. At line 45 we then jump down to line 200 where we perform the DFT on the time domain data. After performing the DFT we return to line 46, where we then jump to line 140 and print the results. Line 48 is only a programming technique for waiting until the user is through examining the data before returning to the main menu.

4. The "inverse transform" starts at line 80 and follows the same pattern as the forward transform.

5. As we noted above, the transform routine starts at line 200. It is similar to the transform routines used previously except now we include the possibility of transforming complex numbers.


4.3 PROGRAM OPERATION/EXAMPLES

If we run this program for the familiar triangle wave of our past examples we will obtain the same data that we obtained previously (Fig. 4.2) except that now zeros will be printed in the column for the imaginary part of the input function (i.e. we still create only the real part of the time domain function).

While this works, of course, we might want to check this program for some function which actually has complex numbers for the input. The GENERATE FUNCTION subroutine presents a second menu which offers a selection of functions. We may take sixteen points on the circumference of a circle as an example, or perhaps the example of an ellipse would be more interesting. If you have not worked with the Fourier Transform of complex variable inputs the results of these examples might prove interesting.

We are not primarily concerned with the transform of complex variables in this short book, and so we will complete our review of the DFT here. It is interesting to note (in connection with complex variables) that we may actually take the transform of two real valued functions simultaneously by placing one in the real part of the input array and the other in the imaginary part. By relatively simple manipulation of the output each of the individual spectrums may be extracted (see, for example, Fast Fourier Transforms, Chapter 3.5, by J.S. Walker, CRC Press).
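That "two for one" manipulation can be sketched in Python (a naive, unscaled DFT written for this illustration; the helper names are mine, not Walker's). One real sequence goes in the real part, the other in the imaginary part, and the conjugate symmetry of the output untangles the two spectra:

```python
import cmath

def dft(x):
    """Naive, unscaled complex DFT, for illustration only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def two_for_one(a, b):
    """Transform two real sequences with a single complex DFT.
    With a in the real part and b in the imaginary part,
    A(k) = (X(k) + conj(X(N-k)))/2 and B(k) = (X(k) - conj(X(N-k)))/2i."""
    N = len(a)
    X = dft([complex(ar, br) for ar, br in zip(a, b)])
    A = [(X[k] + X[-k % N].conjugate()) / 2 for k in range(N)]
    B = [(X[k] - X[-k % N].conjugate()) / 2j for k in range(N)]
    return A, B
```

The extracted A and B agree (to rounding error) with the DFTs of the two sequences taken separately.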

PART II

THE FFT


CHAPTER V

FOUR FUNDAMENTAL THEOREMS

5.0 INTRODUCTION

Our development of the FFT (in chapter 7) will be based on these four theorems. Their validity is not our concern here (proofs are relegated to appendix 5.3); rather, we need only understand their function. We will illustrate these theorems via real examples using the DFT program developed in the previous chapters. This DFT program has necessarily been expanded for these illustrations and is listed in appendices 5.1 and 5.2.

This material is easily grasped, but that does not diminish its importance; its comprehension is imperative. These theorems are the key to understanding the FFT, and consequently, this chapter is dedicated solely to walking through each of these illustrations step by step. The best approach might be to run each illustration on your computer while reading the accompanying text.

5.1 THE SIMILARITY THEOREM

The Similarity Theorem might better be called "the reciprocity theorem" for it states: "As the time domain function


expands in time, the frequency domain function compresses in spectrum, and increases in amplitude." The input function for program DFT5.01 is a half cycle of Sin²(x) centered in the middle of the time domain. The program requests a "width" (actually the half-width) which specifies the number of data points over which the input function is to be spread. According to Similarity then, the spectrum of the frequency domain will expand and compress in inverse proportion to the specified width of the time domain function. The amplitude of the time domain function is held constant (peak amplitude of 32) so that, in keeping with Similarity,

T T
0 +0.00000 +0.00000 16 +32.00000 +0.00000
1 +0.00000 +0.00000 17 +0.00000 +0.00000
2 +0.00000 +0.00000 18 +0.00000 +0.00000
3 +0.00000 +0.00000 19 +0.00000 +0.00000
4 +0.00000 +0.00000 20 +0.00000 +0.00000
5 +0.00000 +0.00000 21 +0.00000 +0.00000
6 +0.00000 +0.00000 22 +0.00000 +0.00000
7 +0.00000 +0.00000 23 +0.00000 +0.00000
8 +0.00000 +0.00000 24 +0.00000 +0.00000
9 +0.00000 +0.00000 25 +0.00000 +0.00000
10 +0.00000 +0.00000 26 +0.00000 +0.00000
11 +0.00000 +0.00000 27 +0.00000 +0.00000
12 +0.00000 +0.00000 28 +0.00000 +0.00000
13 +0.00000 +0.00000 29 +0.00000 +0.00000
14 +0.00000 +0.00000 30 +0.00000 +0.00000
15 +0.00000 +0.00000 31 +0.00000 +0.00000

Fig. 5.1 - Time Domain for Width = 1

the spectrum amplitude will be proportional to the width! In this example the amplitude of the output will vary between 1.0 and 16


(for the range of widths allowed, also 1 through 16). Repeating the example with various widths illustrates Similarity.

Run DFT5.01 and specify a width of 1. A single data point will be generated as the input function (we reproduce the computer screen in fig. 5.1 on the previous page). The spectrum for this input is "flat" (i.e. a series of components alternating between +1 and -1 as shown in figure 5.2 below). A graph of both frequency and time domain functions is given in figure 5.3. Note that in the graphical display only the magnitude of the frequency domain data is displayed.

FREQ F(COS) F(SIN) FREQ F(COS) F(SIN)
0 +1.00000 +0.00000 16 +1.00000 +0.00000
1 -1.00000 -0.00000 17 -1.00000 -0.00000
2 +1.00000 +0.00000 18 +1.00000 +0.00000
3 -1.00000 -0.00000 19 -1.00000 -0.00000
4 +1.00000 +0.00000 20 +1.00000 +0.00000
5 -1.00000 -0.00000 21 -1.00000 -0.00000
6 +1.00000 +0.00000 22 +1.00000 +0.00000
7 -1.00000 -0.00000 23 -1.00000 -0.00000
8 +1.00000 +0.00000 24 +1.00000 +0.00000
9 -1.00000 -0.00000 25 -1.00000 -0.00000
10 +1.00000 +0.00000 26 +1.00000 +0.00000
11 -1.00000 -0.00000 27 -1.00000 -0.00000
12 +1.00000 +0.00000 28 +1.00000 +0.00000
13 -1.00000 -0.00000 29 -1.00000 -0.00000
14 +1.00000 +0.00000 30 +1.00000 +0.00000
15 -1.00000 -0.00000 31 -1.00000 -0.00000
Fig. 5.2 - Spectrum for Time Domain Width = 1

This single example doesn't say anything about the Similarity Theorem, of course, for the relationship concerns the expansion and compression of the function. Repeat the exercise

[Graphs: Time Domain Waveshape; Frequency Domain]

Fig. 5.3 - Similarity Test Width = 1

but this time select a width of 2. The graphical results are shown in Fig. 5.4 below. This display shows the "expanded" time domain function which now has a maximum value at the 16th data point (amplitude = 32) with two additional data points (amplitudes = 16) on either side. The spectrum is still a series of data points which alternate in sign, but now the amplitudes of the frequency components diminish as the frequency increases. At a frequency of 8 the amplitude is 0.5, and from there the components "roll off" to negligible amplitudes at the higher frequencies.

Continue the experiment by repeating the Similarity Test with widths of 4, 8 and 16. The frequency spectrum shrinks and the amplitude increases as the time domain function expands (see figures 5.5 through 5.7).
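The whole experiment can be reproduced numerically. The Python sketch below uses a naive DFT; the Sin² pulse generator is my reading of what DFT5.01 does (32-point record, peak 32, centered at point 16), so treat the exact shapes as approximate. It shows the peak spectral amplitude growing in proportion to the width while the number of significant spectral components shrinks:

```python
import math

N = 32  # DFT5.01 works with a 32-point record

def spectrum_mag(y):
    """Magnitudes of the 1/N-scaled DFT of a real sequence."""
    mags = []
    for k in range(N):
        re = sum(y[n] * math.cos(2 * math.pi * k * n / N) for n in range(N)) / N
        im = sum(y[n] * math.sin(2 * math.pi * k * n / N) for n in range(N)) / N
        mags.append(math.hypot(re, im))
    return mags

def pulse(width):
    """Half cycle of Sin^2 (peak 32) centered at n = 16, spread over
    +/- width points -- an approximation of the DFT5.01 input."""
    y = [0.0] * N
    for n in range(N):
        d = n - 16
        if abs(d) <= width:
            y[n] = 32 * math.cos(math.pi * d / (2 * width)) ** 2
    return y

for w in (1, 2, 4, 8, 16):
    mag = spectrum_mag(pulse(w))
    significant = sum(m > 0.1 for m in mag)
    print(f"width {w:2d}: peak amplitude {mag[0]:5.2f}, "
          f"significant components {significant}")
```

With width 1 every component has magnitude 1 (the flat spectrum of Fig. 5.2); with width 16 the peak amplitude reaches 16 and almost all of the energy sits in the lowest frequencies, just as the Similarity Theorem predicts.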

[Graphs: Time Domain Waveshape; Frequency Domain]

Fig. 5.4 - Similarity Test Width = 2

[Graphs: Time Domain Waveshape; Frequency Domain]

Fig. 5.5 - Similarity Test Width = 4

[Graphs: Time Domain Waveshape; Frequency Domain]

Fig. 5.6 - Similarity Test Width = 8

[Graphs: Time Domain Waveshape; Frequency Domain]

Fig. 5.7 - Similarity Test Width = 16


This phenomenon is the relationship known as Similarity.

It is understood, of course, that "similarity" is completely general (i.e. it works for any input function) and also bilateral; if we compress the spectrum of a function its time domain will be expanded and simultaneously decreased in amplitude. This relationship is indeed simple, but neither insignificant nor trivial. In fact, it is perhaps the most fundamental relationship that exists between the frequency and time domains. The essence of this relationship is this: faster transitions and shorter durations require (imply) higher frequencies, and slower transitions and longer durations require (imply) lower frequencies. If your tape recorder runs too fast, everyone sounds like Chip and Dale; if it runs too slow they sound like Lurch. It's a relationship that's inevitable; still, perhaps, not completely inescapable.

5.2 THE ADDITION THEOREM

This theorem states that the transform of the sum of two functions is equal to the sum of the transforms of the two functions individually:

    Xform{f1(x) + f2(x)} = Xform{f1(x)} + Xform{f2(x)}          (5.1)

This is the result of the system being linear of course, and consequently, may not seem remarkable. On the other hand, it allows a certain amount of manipulation that is worth illustrating.
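Equation (5.1) is easy to confirm numerically. The short Python check below uses a naive complex DFT written for this illustration (the two sequences are arbitrary test data, not values from the book):

```python
import cmath

def dft(x):
    """Naive complex DFT (no scaling), for illustration only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

f1 = [1.0, 3.0, -2.0, 0.5, 0.0, 1.0, -1.0, 2.0]
f2 = [0.0, -1.0, 1.0, 1.0, 2.0, -0.5, 0.0, 1.0]

lhs = dft([a + b for a, b in zip(f1, f2)])       # Xform{f1 + f2}
rhs = [A + B for A, B in zip(dft(f1), dft(f2))]  # Xform{f1} + Xform{f2}
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

Because the DFT is just a weighted sum of the input samples, the two sides agree term by term to within rounding error.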

The example selected for DFT5.02 concerns "rising" and
