Electromagnetic Theory

EE 60548-01/-02

Gregory Timp
EE/Biological Science
University of Notre Dame
Instructor:
Gregory Timp
Office: 316 Stinson-Remick/50 Galvin
Telephone: 631-1272
Email: gtimp@[Link]
Website: [Link]
Office hours: by appointment

Textbook:
Class notes (required). Available on-line.

Course Website:
The CANVAS EE 60548 website can be found at the address: [Link]. The website will be used for posting all of the course materials: lectures, homework assignments, literature, etc. Check your e-mail frequently for announcements pertinent to homework problems as well as other posted materials.
On-line, synchronous lectures:
The class is scheduled to meet in Stinson-Remick Hall, Room 100, MW 9:30-10:45 am. However, if a student becomes incapacitated, it is still possible to participate in class virtually and complete all assignments and assessments via ZOOM. To join the ZOOM meeting, click the URL below to download the app and start/join Gregory Timp's Personal Meeting Room: [Link]. To avoid spamming, there is a waiting room to screen participants, so you will have to WAIT for admission prior to class.
Grading Policy:
The salient features of the grading policy are:
1. Homework: 20% of total
2. Midterm Exam: 20% of total
3. Final Exam: 40% of total
4. Class Participation: 20% of total
Homework: Up to 10 (likely 5-6) homework problem sets will be assigned during the course of the semester. Homework is a team effort. You will be allowed to work in teams on each homework assignment; a team will be assigned to you on a rotating basis. Only the team homework should be submitted for grading.
Your homework grade will be based, in part, on the team performance evident in the homework write-up/presentation and, in part, on the team member evaluation forms. If the homework is not completed on time, NO GRADE will be given for that homework and the team receives a zero. The homework sets are due at the beginning of class.

Exams: The examinations are ONE HOUR+ in duration and administered during the
regular class meeting time. The exams usually consist of problems derived from the
homework, class work or assigned reading. The exam score is an individual
score. Thus, you are encouraged to ATTEND CLASS and DO ALL OF THE
HOMEWORK PROBLEMS YOURSELF. If you have an unexcused absence from the
class the day an exam is administered, you will receive a ZERO for the exam.
Excused absences are only given prior to class.
Suggested Readings: Rao 1.6

1. Elements of Engineering Electromagnetics, N.N. Rao, Prentice Hall, 5th Edition, 2010 (recommended). ON RESERVE

2. Fundamentals of Applied Electromagnetics, F.T. Ulaby, E. Michielssen, U. Ravaioli, Prentice Hall, 6th Edition, 2010 (elementary). ON RESERVE

3. Electromagnetic Wave Theory, J.A. Kong, 2nd Edition, John Wiley & Sons (advanced). ON RESERVE

4. Microwave Engineering, David M. Pozar, 4th Edition, John Wiley & Sons, 2011. ON RESERVE

5. Electromagnetic Fields and Waves, Magdy F. Iskander, 2nd Edition, Waveland Press, 2013.

6. Electromagnetic Theory, J.A. Stratton, McGraw-Hill (advanced).

7. Applets can be found at the following site: [Link]


Homework assignments will include problems that make use of these applets. Students enrolled in this course are expected to have access to a PC with Google Chrome or, better yet, Mozilla Firefox.
Virtual Workstations: Subscribers to this class will have access to virtual workstations with software for EDUCATIONAL USE ONLY. The workstations can be accessed through: [Link] [Link] [Link]. The software will be used for numerical simulations. The software should include (at least):
AUTOCAD 2021;

MATLAB R2021a with code such as multidiel.m, which can be found at: [Link] (2019). Follow the instructions; the code should be copied into C:\ewa\. To use it, run addpath C:\ewa in MATLAB. Don't use the MathWorks electromagnetic-waves-antennas-toolbox URL: [Link]; and

COMSOL 6.0 Classroom Kit with the following modules:


Mathematics module
Wave Optics module
Microfluidics module
CFD module
AC/DC module
Convection-Diffusion Equation module
Churchill, Brown, Verhey, Complex Variables and Applications, Ch. 8, Sec. 76

In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths. More formally, let U and V be open subsets of ℝⁿ. A function f: U → V is called conformal (or angle-preserving) at a point u₀ ∈ U if it preserves angles between directed curves through u₀, as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature. The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix.
For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types. The notion of conformality generalizes in a natural way to maps between Riemannian or semi-Riemannian manifolds.
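The angle-preserving property is easy to check numerically. The sketch below (my own illustration, not from the notes) uses the analytic map f(z) = z² as an example and compares the angle between two curve directions before and after the map; the point z₀ and the directions are arbitrary choices.

```python
import cmath

# Numerical check that an analytic map (here the example f(z) = z^2)
# preserves the angle between two curves crossing at z0 (away from f'(z0) = 0).
def f(z):
    return z * z

def tangent_angle(direction, z0, h=1e-6):
    # Angle of the image of a tiny step from z0 in the given direction.
    return cmath.phase(f(z0 + h * direction) - f(z0))

z0 = 1.0 + 0.5j
d1 = cmath.exp(1j * 0.3)   # direction of first curve through z0
d2 = cmath.exp(1j * 1.1)   # direction of second curve through z0

angle_before = 1.1 - 0.3
angle_after = tangent_angle(d2, z0) - tangent_angle(d1, z0)
print(abs(angle_after - angle_before) < 1e-4)  # True: the angle is preserved
```

Lengths, by contrast, are scaled by |f′(z₀)|, which varies from point to point.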

1. Electromagnetics is old
• Magnetostatics (2500 BC, Chinese) (lodestone + amber)
• Electrostatics (600 BC, Thales of Miletus, Greek)
• Coulomb (1785, French)

F_e = (1/(4πε₀)) q₁q₂/r²   (Coulomb, 1785)

F_g = G m₁m₂/r²   (Newton, 1687)

For two electrons:

F_e/F_g = (e/m)² · 1/(4πε₀G) ≈ 4×10⁴²!

where e = 1.6×10⁻¹⁹ C and m = 9.1×10⁻³¹ kg
and G = 6.67×10⁻¹¹ N·m²/kg² and ε₀ = 8.85418782×10⁻¹² F/m (s⁴A²/m³kg)

Astronomers put the number of particles in the universe at somewhere between 10⁷² and 10⁸⁷; the age of the universe is 13.75 billion years, and there are about π×10⁷ seconds in a year. Currents >3.7×10⁵⁰ A are difficult to rationalize.
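The famous ~10⁴² ratio above can be checked in a couple of lines. This is a sketch using the constants quoted in the text:

```python
# Ratio of Coulomb to gravitational force between two electrons,
# F_e / F_g = (e/m)^2 / (4*pi*eps0*G), using the values quoted above.
import math

e = 1.6e-19            # electron charge, C
m = 9.1e-31            # electron mass, kg
G = 6.67e-11           # gravitational constant, N·m²/kg²
eps0 = 8.85418782e-12  # permittivity of free space, F/m

ratio = (e / m) ** 2 / (4 * math.pi * eps0 * G)
print(f"{ratio:.2e}")  # about 4.2e+42
```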
How big is the number 10⁴²?
1. Distance:
• Dimension of the observable universe ~ 8.8×10²⁶ m (3.26×10¹⁰ parsec or 0.93×10¹¹ light-years)
• Radius of an atomic nucleus ~ 10⁻¹⁵ m = 1 fm
• universe/nucleus ~ 10⁴²

2. Time:
• Age of the universe ~ 10¹⁷ sec (13.75 billion years)
• Lifetime of a strange particle (decaying by the strong force) ~ 10⁻²⁵ sec
• universe/particle ~ 10⁴²

Maxwell's Equations (electricity, magnetism and light are all the same: electromagnetic fields):

∇ × E = −∂B/∂t
∇ × H = J + ∂D/∂t
∇ · D = ρ
∇ · B = 0

JAMES CLERK MAXWELL (1831-1879): color photos, Maxwell-Boltzmann distribution.
OLIVER HEAVISIDE (1850-1925).

Classical electromagnetism starts and ends with (James Clerk) Maxwell's (Heaviside) equations.

• Ampère (1826): ∮_C H · dl = ∫_S J · ds
• Faraday (1831): ∮_C E · dl = −(d/dt) ∫_S B · ds
• Maxwell (1860): ∮_C H · dl = (d/dt) ∫_S D · ds + ∫_S J · ds
• Gauss (1870): ∮_S D · ds = q
• Lorentz (1900): F = q(E + v × B)

Lorentz transformation (the SPEED OF CAUSALITY):

x′ = γ(x − vt),  y′ = y,  z′ = z,  t′ = γ(t − vx/c²),  where γ = 1/√(1 − v²/c²)

[Link]
ELECTROMAGNETIC QUANTITIES:

Vector quantities in space:
E  Electric Field
H  Magnetic Field
D  Electric Flux (Displacement) Density
B  Magnetic Flux (Induction) Density
J  Current Density
∂D/∂t  Displacement Current Density

ρ  Charge Density  (∫_V ρ dV = Q)
ε  Dielectric Permittivity
μ  Magnetic Permeability
2. Electromagnetics is important:
• Foundation of electrical engineering:
  low frequency: EM theory → circuit theory
  high frequency: EM theory → optics
  THz: EM theory → spectroscopy

• Daily life: TVs, ICs, smart phones, communications, antennas, radar, lasers, remote sensing, microwave ovens, GPS, MRI….
3. Electromagnetics is new (and exciting):
• Stealth
• Invisibility
• VCSELs/lasers
• Wireless communications
• Fractal antenna design
• High speed, high density integrated circuits (ICs)
• Remote sensing:
Earth surface
Atmosphere
airport security
ground penetrating radar
autonomous vehicles (GPS)
• Medicine:
Magnetic resonance imaging
Optical Coherence Tomography
Optical trapping—single molecule spectroscopy
[Link]

Most medical applications rely on detecting a radio frequency signal emitted by excited hydrogen atoms in the body (e.g. water) using energy from an oscillating magnetic field applied at the appropriate resonant frequency. The orientation of the image is controlled by varying the main magnetic field using gradient coils.
HOLOGRAPHIC OPTICAL TWEEZERS (HOT):
• HOT arrays organize cells
• HOT + fluidics convey/sort cells
• HOT + hydrogel encapsulation: photopolymerized polyethylene glycol diacrylate (PEGDA)
(Trapping: Ashkin 1985; fluidics: Whitesides et al. 2000)
Stealth and Invisibility

Metamaterials are artificial media structured on a size scale smaller than the wavelength of external stimuli. Materials of interest exhibit properties not found in nature, such as a negative index of refraction. Applications include lenses for imaging beyond the diffraction limit and "cloaking" devices.
One of the most interesting things about a smart phone is that it is actually a radio: an extremely small radio, but a radio nonetheless. Wireless communications can trace its roots to the invention of the radio by Nikola Tesla in the 1890s. On the other hand, the wired telephone was invented by Alexander Graham Bell in 1876. Since Tesla was smarter than Bell, it follows that wireless communications would eventually supersede the wired version so….
A smart phone in pieces. Smart phones are some of the most intricate devices we use on a daily basis. A modern phone has to process millions of calculations per second in order to compress and decompress the voice and data stream. But when you take a cell phone apart, you find that it contains just a few individual parts: a system-on-a-chip (SoC) circuit board containing the "brains" of the phone; memory and storage; an antenna; a liquid crystal (LCD) or light-emitting diode (LED) display; a keyboard; a modem; a camera; sensors such as an accelerometer, gyroscope, compass, light sensor, and proximity sensor; a microphone; a speaker; and a battery.

In the dark ages before smart phones, people who really needed mobile communications installed radio telephones in their cars. In the radio-telephone system, there was one central antenna tower per city, and maybe 20 channels available on that tower. The central antenna meant that the phone in your car needed a powerful transmitter, enough to transmit 40 or 50 miles. It also meant that not many people could use radio telephones due to the limited number of channels.
Smart phones have low-power transmitters in them. Many phones transmit at a signal strength of 1.6-2 watts (for comparison, most citizens-band (CB) radios transmit at 4 watts). The base station also transmits at low power. Low-power transmitters have two advantages:
1. The transmissions of a base station and the phones within its cell do not make it very far outside that cell. Therefore, the same frequencies can be re-used extensively across the city/landscape.

2. The power consumption of the smart phone, which is normally battery-operated, is relatively low. Low power means small batteries and long battery lifetime. This is what has made hand-held cellular phones possible.

3. The cellular approach still requires a large number of base stations in a city of any size. A typical large city can have hundreds of towers, but because so many people are using cell phones, costs remain low per user. Each carrier in each city also runs one central office called the mobile telephone switching office (MTSO) that handles all of the phone connections to the normal land-based phone system and controls all of the base stations in the region. A cell-phone tower is typically a steel pole or lattice structure that rises hundreds of feet into the air. The figure on the right shows a typical cell-phone tower with three different cell-phone providers riding on the same structure.
You have probably noticed that almost every radio you see (like your phone, the
radio in your car, etc.) has an antenna. Antennas come in all shapes and sizes,
depending on the frequency the antenna is trying to receive. The antenna can be
anything from a long, stiff wire (as in the AM/FM radio antennas on most cars) to a
satellite dish. Radio transmitters also use extremely tall antenna towers to transmit
their signals. The main idea behind an antenna in a radio transmitter is to launch the
radio waves into space. In a receiver, the idea is to pick up as much of the
transmitter's power as possible and supply it to the tuner.

Let's say that you are trying to build a radio tower for radio station 890AM (WLS in Chicago). It is transmitting a sine wave with a frequency of 890 kHz. In one cycle of the sine wave, the transmitter is going to switch the signal back and forth, changing polarity. If the transmitter is running at 890 kHz, that means that every cycle completes in 1/890,000 = 1.12 μs; one half of that cycle is 0.56 μs. The speed of light is ~300,000 km/sec (3×10⁸ m/sec, or 186,000 miles/second). That means the optimal antenna size for an 890 kHz transmitter, which corresponds to a half of a wavelength, is about 3×10⁸ m/sec × 0.56 μs = 168 meters, or about 550 feet. So, AM radio stations need very tall towers. On the other hand, for a phone working at 900/1800 MHz, the optimum antenna size is about 16.6/8.3 cm or 6/3 inches. This is why cell phones can have such short antennas. (You might have noticed that the AM radio antenna in your car is not 550 feet long; it is only a couple of feet long. If you made the antenna longer it would receive better, but AM stations are so strong in a city that it doesn't really matter if your antenna is the optimal length; WLS has a 50 kW transmitter outside Chicago.)
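The half-wave antenna length L = c/(2f) used in the WLS and cell-phone examples above is a one-line calculation:

```python
# Half-wave antenna length L = c / (2 f) for the frequencies discussed above.
c = 3.0e8  # speed of light, m/s

def half_wave_m(freq_hz):
    # Length of a half-wavelength antenna, in meters.
    return c / (2 * freq_hz)

for label, f in [("WLS 890 kHz", 890e3), ("900 MHz phone", 900e6), ("1800 MHz phone", 1800e6)]:
    print(f"{label}: {half_wave_m(f):.4g} m")
```

This reproduces the ~168 m AM tower and the ~16.7/8.3 cm phone antennas quoted in the text.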

So then, why the tall cell phone tower? Line-of-sight.

Line-of-sight commonly refers to telecommunication links that rely on an unobstructed path between the transmitting antenna and the receiving antenna. Such capability is necessary for high frequency links that offer relatively high bandwidth communication circuits. Typical operating frequencies are in the high megahertz (MHz) to gigahertz (GHz) range, where the radio path is not reflected or refracted to any great extent.

Finally, if you look at the base of the tower, you can see that each provider has its own equipment, and you can also see how little equipment is involved today (older towers often have small buildings at the base). The box at the base houses the radio transmitters and receivers that let the tower communicate with the phones. The radios connect to the antenna on the tower through a set of thick cables. If you look closely you will see that the tower and all of the cables and equipment at the base of the tower are heavily grounded. For example, in the figures the green wires bolt onto a solid copper grounding plate.
The ionosphere, the electrically conducting region of the upper atmosphere, plays a role in
essentially all radio propagation. At extremely low frequency (ELF: 3-3000 Hz) and very low
frequency (VLF:3-30 kHz), the ground and the ionosphere are good electrical conductors and
form a spherical earth-ionosphere waveguide suitable for long range communications (and
navigation) (left figure).
A simple approximation of the earth-ionosphere waveguide is the flat-earth model (right figure). The earth and ionosphere are modeled as an infinite parallel-plate waveguide, with the curvature of the earth and the ionosphere, and the fringing fields, neglected. According to this model, the earth is a ground plane located at x = 0 and the ionosphere boundary is at x = h. (This model is valid for distances up to half an earth radius from the source, because at greater distances the curvature of the earth affects ELF propagation.)
The ionosphere is most generally treated as an inhomogeneous and anisotropic cold plasma. In the ionosphere, the constituent gases are ionized due to ultraviolet radiation from the Sun, resulting in positively charged ions and negatively charged electrons. The positive ions are relatively immobile compared to the electrons. The electron density in the ionosphere exists in layers (denoted by D, E, F) in which the ionization changes with the hour of the day, the season and the sunspot cycle. In the daytime, the D-layer, which comprises the base of the ionosphere, is about h = 70 km above the earth.
Figure captions: The displacement field of a TM mode traveling around a sharp bend carved in a photonic crystal. A scanning electron micrograph of a triangular array of air columns in GaAs that constitutes a two-dimensional photonic crystal.

Complex numbers are defined by points or vectors in the complex plane, and can be represented in Cartesian coordinates:

z = a + ib,  i = √(−1)

or in polar (exponential) form:

z = A exp(iθ) = A cos(θ) + iA sin(θ)

where:
a = A cos(θ)  real part
b = A sin(θ)  imaginary part
A = √(a² + b²),  θ = tan⁻¹(b/a)

In the complex (Re, Im) plane, z is the vector from the origin to the point (a, b), with length A and angle θ measured from the real axis. Note that the polar representation is not unique:

z = A exp(iθ) = A exp(iθ + i2πn)

Every complex number has a complex conjugate: i.e.

z = a + ib  ⇒  z* = (a + ib)* = a − ib

so that:

z · z* = (a + ib)(a − ib) = a² + b² = |z|² = A²,  A = √(a² + b²)

In polar form we have:

z = A exp(iθ) = A cos(θ) + iA sin(θ)
⇒ z* = (A exp(iθ))* = A exp(−iθ) = A exp(i2π − iθ)
     = A cos(θ) − iA sin(θ)
The polar form is more useful in some cases. For instance, when raising a complex number to a power, the Cartesian form:

zⁿ = (a + ib)(a + ib)⋯(a + ib)

is cumbersome, and impractical for noninteger exponents. In polar form instead, the result is immediately obvious:

zⁿ = (A exp(iθ))ⁿ = Aⁿ exp(inθ)

In the case of roots, one should remember to consider θ + 2πk as the argument of the exponential, with k = integer, otherwise some of the possible roots are skipped:

z^(1/n) = (A exp(iθ + i2πk))^(1/n) = A^(1/n) exp(iθ/n + i2πk/n)

The results corresponding to angles up to 2π are solutions of the root operation.
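The root formula above can be checked directly. This sketch enumerates k = 0, …, n−1 to recover all n distinct roots (fourth roots of unity as the example):

```python
import cmath

# All n distinct nth roots of z: include theta + 2*pi*k, k = 0..n-1,
# as the text warns, or some roots are skipped.
def nth_roots(z, n):
    A, theta = cmath.polar(z)
    return [A ** (1.0 / n) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(1 + 0j, 4)   # fourth roots of unity: 1, i, -1, -i
for r in roots:
    print(f"{r:.3f}")
```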
In electromagnetic analysis it is often convenient to keep in mind the following simple identities:

EULER'S IDENTITY:
i = exp(iπ/2),  −i = exp(−iπ/2),  −1 = exp(iπ)

LEONHARD EULER. "Read Euler, he is the master of us all." (Laplace)
It is also useful to remember the following expressions for trigonometric functions:

cos(z) = [exp(iz) + exp(−iz)]/2,  sin(z) = [exp(iz) − exp(−iz)]/(2i)

cosh(z) = cos(iz) = [exp(z) + exp(−z)]/2,  sinh(z) = −i sin(iz) = [exp(z) − exp(−z)]/2

all resulting from Euler's formula:

exp(iz) = cos(z) + i sin(z)

The complex representation is very useful for time-harmonic functions of the form:

A cos(ωt + φ) = Re{A exp(iωt + iφ)}
             = Re{A exp(iφ) exp(iωt)}
             = Re{Ã exp(iωt)}

The complex quantity:

Ã = A exp(iφ)

contains all the information about amplitude and phase of a time-harmonic signal and is called the phasor of:

A cos(ωt + φ)

If it is known that the signal is time-harmonic with frequency ω, the phasor completely characterizes its behavior.
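The equivalence A cos(ωt + φ) = Re{Ã exp(iωt)} can be verified numerically; the amplitude, frequency, and phase below are arbitrary example values:

```python
import cmath
import math

# Check A*cos(w*t + phi) == Re{ A_phasor * exp(i*w*t) },
# with phasor A_phasor = A*exp(i*phi), at a few sample times.
A, w, phi = 2.0, 2 * math.pi * 50, 0.7   # example amplitude, rad/s, phase
A_phasor = A * cmath.exp(1j * phi)

for t in (0.0, 1e-3, 7e-3):
    direct = A * math.cos(w * t + phi)
    via_phasor = (A_phasor * cmath.exp(1j * w * t)).real
    assert abs(direct - via_phasor) < 1e-12
print("phasor representation checked")
```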
Often, a time-harmonic signal may be of the form:

A sin(ωt + φ)

and so, using Euler's formula exp(iz) = cos(z) + i sin(z), we have the following complex representation:

A sin(ωt + φ) = Re{−iA[cos(ωt + φ) + i sin(ωt + φ)]}
             = Re{−iA exp(iωt + iφ)}
             = Re{A exp(−iπ/2) exp(iφ) exp(iωt)}
             = Re{A exp(i(φ − π/2)) exp(iωt)}
             = Re{Ã exp(iωt)}

with phasor:

Ã = A exp(i(φ − π/2))

This result is not surprising, since:

cos(ωt + φ − π/2) = sin(ωt + φ)
Time differentiation can be greatly simplified through the use of phasors. For instance, consider the signal:

V(t) = V₀ cos(ωt + φ)  with phasor  Ṽ = V₀ exp(iφ)

The time derivative can then be expressed as:

∂V(t)/∂t = −ωV₀ sin(ωt + φ)
         = Re{iωV₀ exp(iφ) exp(iωt)}

⇒ iωV₀ exp(iφ) = iωṼ  is the phasor of  ∂V(t)/∂t
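The rule "differentiation becomes multiplication by iω" can be spot-checked against a finite difference; the signal parameters below are arbitrary example values:

```python
import cmath
import math

# Check that i*w*V is the phasor of dV/dt for V(t) = V0*cos(w*t + phi),
# comparing against a central finite difference.
V0, w, phi = 1.5, 2 * math.pi * 60, 0.4  # example values
V = V0 * cmath.exp(1j * phi)             # phasor of V(t)
dV = 1j * w * V                          # claimed phasor of dV/dt

t, h = 2e-3, 1e-8
numeric = (V0 * math.cos(w * (t + h) + phi)
           - V0 * math.cos(w * (t - h) + phi)) / (2 * h)
from_phasor = (dV * cmath.exp(1j * w * t)).real
assert abs(numeric - from_phasor) < 1e-3
print("d/dt corresponds to multiplication by i*omega")
```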
Likewise, time integration can also be greatly simplified by the use of phasors. Consider for instance the signal:

V(t) = V₀ cos(ωt + φ)  with phasor  Ṽ = V₀ exp(iφ)

The time integration can be expressed as:

∫ᵗ V(t′)dt′ = (V₀/ω) sin(ωt + φ)
            = Re{(V₀/(iω)) exp(iφ) exp(iωt)}

⇒ (V₀/(iω)) exp(iφ) = Ṽ/(iω)  is the phasor of  ∫ᵗ V(t′)dt′

With phasors, time-differential equations for time-harmonic signals can be transformed into relatively simple algebraic equations. Consider the simple series circuit below, realized with lumped elements: a source v(t) driving a loop current i(t) through L, R and C in series.

This circuit is described by the integro-differential equation:

v(t) = L di(t)/dt + Ri(t) + (1/C) ∫ᵗ i(t′)dt′

Upon time-differentiation we can eliminate the integral:

dv(t)/dt = L d²i(t)/dt² + R di(t)/dt + i(t)/C
If we assume a time-harmonic excitation, we know that voltage and current should have the form:

v(t) = V₀ cos(ωt + φ_V)  ⇒  Ṽ = V₀ exp(iφ_V)  (phasor)
i(t) = I₀ cos(ωt + φ_I)  ⇒  Ĩ = I₀ exp(iφ_I)  (phasor)

Thus, if V₀ and φ_V are given, I₀ and φ_I are the unknowns of the problem.

The differential equation can now be rewritten using phasors:

Re{iωṼ exp(iωt)} = L Re{−ω²Ĩ exp(iωt)} + R Re{iωĨ exp(iωt)} + (1/C) Re{Ĩ exp(iωt)}
Finally, the transformed phasor equation is obtained as:

Ṽ = [R + iωL + 1/(iωC)] Ĩ = Z·Ĩ

where

Z = R + i(ωL − 1/(ωC))
(impedance = resistance + i·reactance)
The result for the phasor current is simply obtained as:

Ĩ = Ṽ/Z = Ṽ/[R + i(ωL − 1/(ωC))] = I₀ exp(iφ_I)

which readily yields the unknowns I₀ and φ_I in terms of known quantities.

The time-dependent current is then obtained from:

i(t) = Re{I₀ exp(iφ_I) exp(iωt)} = I₀ cos(ωt + φ_I)
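The phasor solution of the series RLC circuit above is a few lines of complex arithmetic; the component values and drive frequency below are assumed example values:

```python
import cmath
import math

# Phasor solution of the series RLC circuit above (example values assumed).
R, L, C = 50.0, 10e-3, 1e-6         # ohms, henries, farads
V0, phi_v = 10.0, 0.0               # source amplitude and phase
w = 2 * math.pi * 1000              # drive frequency, rad/s

Z = R + 1j * (w * L - 1 / (w * C))  # impedance Z = R + i(wL - 1/(wC))
V = V0 * cmath.exp(1j * phi_v)      # voltage phasor
I = V / Z                           # current phasor
I0, phi_i = abs(I), cmath.phase(I)  # i(t) = I0*cos(w*t + phi_i)
print(f"I0 = {I0:.4f} A, phase = {phi_i:.4f} rad")
```

No differential equation is solved; the integro-differential relation collapses to one complex division.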
We can also just solve the problem by including the integral. Using the phasors above:

Re{Ṽ exp(iωt)} = L Re{iωĨ exp(iωt)} + R Re{Ĩ exp(iωt)} + (1/C) Re{(Ĩ/(iω)) exp(iωt)}

Ṽ = iωL Ĩ + R Ĩ + Ĩ/(iωC)

with the same result: i.e.

Ṽ = [R + i(ωL − 1/(ωC))] Ĩ = ZĨ
The phasor formalism provides a convenient way to solve time-harmonic problems in steady state, without having to solve a differential equation directly. The key to the success of phasors is that with the exponential representation one can immediately separate frequency and phase information. Direct solution of the time-dependent differential equation is only necessary for transients.

Integro-differential equations (i(t) = ?)  --[transform based on phasors]-->  algebraic equations (Ĩ = ?)
Direct solution (transients): i(t)
Phasor solution, then anti-transform: Ĩ → i(t)
The phasor representation of the circuit example above has introduced the concept of impedance, i.e.

Z = R + i(ωL − 1/(ωC))
(impedance = resistance + i·reactance)

Note that the resistance is not explicitly a function of frequency. On the other hand, the reactive components are functions of frequency:

Inductive component ∝ ω
Capacitive component ∝ 1/ω

Because of this frequency dependence, for specified values of L and C, one can always find a frequency at which the magnitudes of the inductive and capacitive terms are equal:

ω_r L = 1/(ω_r C)  ⇒  ω_r = 1/√(LC)

This is a resonance condition. The reactance cancels out and the impedance becomes purely resistive.
The peak value of the current phasor is maximum at resonance: i.e.

Ĩ = Ṽ/Z = Ṽ/[R + i(ωL − 1/(ωC))] = I₀ exp(iφ_I)

I₀ = V₀ / √(R² + (ωL − 1/(ωC))²)

Plotted against ω, |I₀| peaks at ω_r = 1/√(LC) for the series L, R, C circuit driven by v(t).
Consider now the circuit below where an inductor and a capacitor are in parallel: a source V drives current Ĩ through a resistor R in series with the parallel combination L ∥ C.

The input impedance of the circuit is:

Z_in = R + [1/(iωL) + iωC]⁻¹ = R + iωL/(1 − ω²LC)

When:

ω → 0:        Z_in → R
ω → 1/√(LC):  Z_in → ∞
ω → ∞:        Z_in → R

At the resonance condition:

ω_r = 1/√(LC)

the part of the circuit containing the reactive components behaves like an open circuit, and no current can flow. The voltage at the terminals of the parallel circuit is the same as the input voltage V.
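The three limiting cases of Z_in can be checked numerically; the component values below are assumed examples:

```python
import math

# Input impedance Z_in = R + i*w*L / (1 - w^2*L*C) of the series-R,
# parallel-LC circuit above (example component values assumed).
R, L, C = 10.0, 1e-3, 1e-6
w_r = 1 / math.sqrt(L * C)     # parallel resonance, rad/s

def z_in(w):
    return R + 1j * w * L / (1 - w ** 2 * L * C)

print(abs(z_in(0.01 * w_r)))   # ~R: the LC branch barely matters at low frequency
print(abs(z_in(0.999 * w_r)))  # very large: the LC branch is nearly an open circuit
print(abs(z_in(100 * w_r)))    # ~R again at high frequency
```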
Now, consider the input impedance of a transmission line circuit, with an applied voltage v(t) inducing an input current i(t).

For sinusoidal excitation, we can write:

v(t) = V₀ cos(ωt)
i(t) = I₀ cos(ωt − φ),  φ ∈ [−π/2, π/2]

where φ is the phase difference between voltage and current. Note that φ = 0 only when the input impedance is real (purely resistive).
The time-dependent input power is given by:

P(t) = v(t)i(t) = V₀I₀ cos(ωt) cos(ωt − φ)
     = (V₀I₀/2)[cos(φ) + cos(2ωt − φ)]

[using cos(x − y) = cos x cos y + sin x sin y and cos(x + y) = cos x cos y − sin x sin y]

Thus, the power has two (Fourier) components:

(A) an average value:

(V₀I₀/2) cos(φ)

(B) an oscillatory component with frequency 2f:

(V₀I₀/2) cos(2ωt − φ)

FOURIER REPRESENTATION:
P(t) = (V₀I₀/2) cos(φ) + (V₀I₀/2) cos(2ωt − φ)
            (A)                  (B)
The power flow changes periodically in time with an oscillation like (B) about the
average value (A). Note that only when  = 0 do we have cos() = 1, implying that
for a resistive impedance the power is always positive (flowing from generator to
load).

When voltage and current are out-of-phase, the average value of the power has
lower magnitude than the peak value of the oscillatory component. Therefore,
during portions of the period of oscillation the power can be negative (flowing from
load to generator). This means that when the power flow is positive, the reactive
component of the input impedance stores energy, which is reflected back to the
generator side when the power flow becomes negative.

For an oscillatory excitation, we are interested in finding the behavior of the power during one full period, because from this we can easily obtain the average behavior in time. From the point of view of power consumption, we are also interested in knowing the power dissipated by the resistive component of the impedance.
We can also rewrite the time-dependent current as:

i(t) = I₀ cos(ωt − φ) = I₀ cos(φ) cos(ωt) + I₀ sin(φ) sin(ωt)

where we have used the trigonometric formula:

cos(x − y) = cos x cos y + sin x sin y

By substitution, this result yields an equivalent expression for the power:

P(t) = V₀ cos(ωt) · I₀ cos(ωt − φ)
     = V₀I₀ cos(φ) cos²(ωt) + V₀I₀ sin(φ) cos(ωt) sin(ωt)
     = V₀I₀ cos(φ) cos²(ωt) + (V₀I₀/2) sin(φ) sin(2ωt)
       [active (real) power]   [reactive power]

[using cos(2x) = cos²x − sin²x = 2cos²x − 1 = 1 − 2sin²x and sin(2x) = 2 sin x cos x]
The active power corresponds to the power dissipated by the resistive
component of the impedance, and it is always positive.

The reactive power corresponds to the power stored and then reflected by
the reactive component of the impedance. It oscillates from positive to
negative during the period.

Until now, we have discussed properties of the instantaneous power. Since we are considering time-harmonic periodic signals, it is very convenient to consider the time-average power defined as:

⟨P(t)⟩ = (1/T) ∫₀ᵀ P(t)dt

where T = 1/f is the period of the oscillation.

We can use either the Fourier or the active/reactive power formulation to determine the time-average power:

P(t) = V₀I₀ cos(φ) cos²(ωt) + (V₀I₀/2) sin(φ) sin(2ωt)
       [active (real) power]   [reactive power]

Active/reactive power representation:

⟨P(t)⟩ = (1/T) ∫₀ᵀ P(t)dt
       = (1/T) ∫₀ᵀ V₀I₀ cos(φ) cos²(ωt)dt + (1/T) ∫₀ᵀ (V₀I₀/2) sin(φ) sin(2ωt)dt
       = (V₀I₀/2) cos(φ) + 0

[using cos²(ωt) = (1 + cos(2ωt))/2, whose average over a period is 1/2]

This result tells us that the time-average power flow is the average of the active power. The reactive power has ZERO time-average, since power is stored and completely reflected by the reactive component of the input impedance during the period of oscillation.
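The result ⟨P⟩ = (V₀I₀/2)cos(φ) can be confirmed by brute-force averaging over one period; the amplitudes and phase below are assumed example values:

```python
import math

# Numerical check that the time-average of
# P(t) = V0*cos(w*t) * I0*cos(w*t - phi) equals (V0*I0/2)*cos(phi).
V0, I0, phi = 10.0, 2.0, 0.6   # example amplitudes and phase
w = 2 * math.pi * 50
T = 2 * math.pi / w            # one period

N = 100000
avg = sum(V0 * math.cos(w * t) * I0 * math.cos(w * t - phi)
          for t in (k * T / N for k in range(N))) / N

assert abs(avg - V0 * I0 / 2 * math.cos(phi)) < 1e-6
print("time-average power = (V0*I0/2)*cos(phi)")
```

The oscillatory (reactive) part averages to zero, exactly as the integral above shows.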
P(t) = V₀I₀ cos(φ) cos²(ωt) + (V₀I₀/2) sin(φ) sin(2ωt)
       [active (real) power]   [reactive power]

The maximum of the reactive power is:

max{P_reactive} = max{(V₀I₀/2) sin(2ωt) sin(φ)} = (V₀I₀/2) sin(φ)

Since the time-average of the reactive power is ZERO, we often use the maximum value (above) as an indication of the reactive power.

The sign of the phase φ tells us about the imaginary part of the impedance (the reactance): i.e.

φ > 0: The reactance is inductive, i.e.
  Current is lagging with respect to voltage
  Voltage is leading with respect to current

φ < 0: The reactance is capacitive, i.e.
  Voltage is lagging with respect to current
  Current is leading with respect to voltage

If the net reactance is inductive:

Ṽ = ZĨ = RĨ + iωL Ĩ

In the complex plane, Ṽ is the sum of RĨ (along Ĩ) and iωLĨ (rotated +90° from Ĩ), so the current lags the voltage (φ > 0).

If the net reactance is capacitive:

Ṽ = ZĨ = RĨ − i Ĩ/(ωC)

In the complex plane, Ṽ is the sum of RĨ (along Ĩ) and −iĨ/(ωC) (rotated −90° from Ĩ), so the voltage lags the current (φ < 0).
In many engineering problems, we use the root-mean-square (r.m.s.) values of quantities. For a given signal:

v(t) = V₀ cos(ωt)

the r.m.s. value is defined as:

V²_rms = (1/T) ∫₀ᵀ V₀² cos²(ωt)dt = (V₀²/2π) ∫₀^2π cos²(θ)dθ = V₀²/2

⇒ V_rms = V₀/√2

This result is valid for sinusoidal signals. Each signal shape corresponds to a specific coefficient (peak factor = V₀/V_rms) that allows one to convert directly from peak to r.m.s. values.
The peak factor for sinusoidal signals is:

V₀/V_rms = √2 ≈ 1.414

For a symmetric triangular signal the peak factor is:

V₀/V_rms = √3 ≈ 1.732

For a symmetric square signal the peak factor is simply:

V₀/V_rms = 1
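The three peak factors above can be reproduced by sampling one period of each waveform and computing the r.m.s. numerically:

```python
import math

# Numerical r.m.s. values: peak factor V0/Vrms is sqrt(2) for a sinusoid,
# sqrt(3) for a symmetric triangle, 1 for a symmetric square wave.
def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

N, V0 = 100000, 1.0
ts = [k / N for k in range(N)]                    # one period, normalized
sine = [V0 * math.cos(2 * math.pi * t) for t in ts]
tri  = [V0 * (1 - 4 * abs(t - 0.5)) for t in ts]  # symmetric triangle, peaks ±V0
sq   = [V0 if t < 0.5 else -V0 for t in ts]       # symmetric square

for name, sig, pf in [("sine", sine, math.sqrt(2)),
                      ("triangle", tri, math.sqrt(3)),
                      ("square", sq, 1.0)]:
    print(f"{name}: peak factor = {V0 / rms(sig):.4f} (exact {pf:.4f})")
```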
For a non-sinusoidal periodic signal, we can determine the r.m.s. value by using a very important theorem of vector spaces. If we decompose the non-sinusoidal signal into its Fourier components:

V(t) = V_av + V₁(t) + V₂(t) + V₃(t) + ⋯ = V_av + Σ_k V_k(t)

then:

(V(t))²_rms = (Σ_k V_k(t))²_rms = Σ_k (V_k(t))²_rms

So, the r.m.s. value of the signal is computed as:

V_rms = √[V²_av + (V₁)²_rms + (V₂)²_rms + (V₃)²_rms + ⋯]
In the story so far, we have used peak values for the amplitude of voltage and current. In terms of r.m.s. values, the time-average power for a sinusoidal signal is:

⟨P(t)⟩ = (V₀I₀/2) cos(φ) = (V₀/√2)(I₀/√2) cos(φ) = V_rms I_rms cos(φ)
Finally, we can relate the time-average power to the phasors of voltage and current. Since:

v(t) = V₀ cos(ωt) = Re{V₀ exp(iωt)}
i(t) = I₀ cos(ωt − φ) = Re{I₀ exp(−iφ) exp(iωt)}

we have phasors:

Ṽ = V₀  and  Ĩ = I₀ exp(−iφ)

The time-average power in terms of phasors is given by:

⟨P(t)⟩ = (1/2) Re{Ṽ·Ĩ*} = (1/2) Re{V₀ I₀ exp(iφ)} = (1/2) V₀ I₀ cos(φ)
Note that one must always use the complex conjugate of the phasor current to obtain the time-average power. It is important to remember this when voltage and current are expressed as functions of each other. Only when the impedance is purely resistive does Ĩ = Ĩ* = I₀, since φ = 0.

Also, note that the time-average power is always a real positive quantity and that it is NOT the phasor of the time-dependent power. It is a common mistake to think so.
