Overview: This document describes the process of performing a least-squares fit to polynomial data. It begins by reviewing the linear least-squares fit, then extends it to quadratic and higher-order polynomial fits. For a quadratic fit, the chi-square is minimized to obtain three equations in three unknowns (the coefficients a, b, and c), which can be solved using determinants of 3x3 matrices. The process generalizes straightforwardly to higher-order polynomials via higher-dimensional determinant calculations. MATLAB's polyfit command can perform polynomial fitting up to any specified degree.

Physics 114: Lecture 17

Least Squares Fit to Polynomial
Dale E. Gary
NJIT Physics Department
Reminder, Linear Least Squares
 We start with a smooth line of the form
\[ y(x) = a + bx, \]
which is the “curve” we want to fit to the data. The chi-square for this situation is
\[ \chi^2 = \sum \left( \frac{y_i - y(x_i)}{\sigma_i} \right)^2 = \sum \left[ \frac{1}{\sigma_i} \left( y_i - a - b x_i \right) \right]^2 . \]
 To minimize any function, you know that you should take the derivative
and set it to zero. But take the derivative with respect to what?
Obviously, we want to find constants a and b that minimize \chi^2, so we will form two equations:
\[ \frac{\partial \chi^2}{\partial a} = \frac{\partial}{\partial a} \sum \left[ \frac{1}{\sigma_i} \left( y_i - a - b x_i \right) \right]^2 = -2 \sum \frac{1}{\sigma_i^2} \left( y_i - a - b x_i \right) = 0, \]
\[ \frac{\partial \chi^2}{\partial b} = \frac{\partial}{\partial b} \sum \left[ \frac{1}{\sigma_i} \left( y_i - a - b x_i \right) \right]^2 = -2 \sum \frac{x_i}{\sigma_i^2} \left( y_i - a - b x_i \right) = 0. \]



Polynomial Least Squares
 Let’s now allow a curved line of polynomial form
\[ y(x) = a + bx + cx^2 + dx^3 + \cdots \]
which is the curve we want to fit to the data.
 For simplicity, let’s consider a second-degree polynomial (quadratic). The
chi-square for this situation is
\[ \chi^2 = \sum \left( \frac{y_i - y(x_i)}{\sigma_i} \right)^2 = \sum \left[ \frac{1}{\sigma_i} \left( y_i - a - b x_i - c x_i^2 \right) \right]^2 \]
 Following exactly the same approach as before, we end up with three
equations in three unknowns (the parameters a, b, and c):
\[ \frac{\partial \chi^2}{\partial a} = \frac{\partial}{\partial a} \sum \left[ \frac{1}{\sigma_i} \left( y_i - a - b x_i - c x_i^2 \right) \right]^2 = -2 \sum \frac{1}{\sigma_i^2} \left( y_i - a - b x_i - c x_i^2 \right) = 0, \]
\[ \frac{\partial \chi^2}{\partial b} = \frac{\partial}{\partial b} \sum \left[ \frac{1}{\sigma_i} \left( y_i - a - b x_i - c x_i^2 \right) \right]^2 = -2 \sum \frac{x_i}{\sigma_i^2} \left( y_i - a - b x_i - c x_i^2 \right) = 0, \]
\[ \frac{\partial \chi^2}{\partial c} = \frac{\partial}{\partial c} \sum \left[ \frac{1}{\sigma_i} \left( y_i - a - b x_i - c x_i^2 \right) \right]^2 = -2 \sum \frac{x_i^2}{\sigma_i^2} \left( y_i - a - b x_i - c x_i^2 \right) = 0. \]
Second-Degree Polynomial
 The solution, then, can be found from the same determinant technique we
used before, except now we have 3 x 3 determinants:
\[
a = \frac{1}{\Delta}
\begin{vmatrix}
\sum \frac{y_i}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \\
\sum \frac{x_i y_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} \\
\sum \frac{x_i^2 y_i}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} & \sum \frac{x_i^4}{\sigma_i^2}
\end{vmatrix}, \qquad
b = \frac{1}{\Delta}
\begin{vmatrix}
\sum \frac{1}{\sigma_i^2} & \sum \frac{y_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \\
\sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i y_i}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} \\
\sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^2 y_i}{\sigma_i^2} & \sum \frac{x_i^4}{\sigma_i^2}
\end{vmatrix},
\]
\[
c = \frac{1}{\Delta}
\begin{vmatrix}
\sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2} & \sum \frac{y_i}{\sigma_i^2} \\
\sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i y_i}{\sigma_i^2} \\
\sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} & \sum \frac{x_i^2 y_i}{\sigma_i^2}
\end{vmatrix}, \qquad \text{where} \qquad
\Delta =
\begin{vmatrix}
\sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \\
\sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} \\
\sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} & \sum \frac{x_i^4}{\sigma_i^2}
\end{vmatrix}.
\]
 You can see that extending to arbitrarily high powers is straightforward, if tedious.
 We have already seen the MatLAB command that allows polynomial fitting. It
is just p = polyfit(x,y,n), where n is the degree of the fit. We have used
n = 1 so far.
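As a concrete check of the determinant solution above, here is a minimal MATLAB sketch of ours (not from the original slides), assuming equal uncertainties sigma_i = 1 so the 1/sigma_i^2 factors drop out of every sum; the names S, D, Da, Db, and Dc are ours:

  % Quadratic least-squares fit via the 3x3 determinants above
  x = (-3:0.1:3)';
  y = -2 + 3*x + 1.5*x.^2 + 2*randn(size(x));  % noisy quadratic data
  S = @(v) sum(v);                     % shorthand for the sums
  D = [S(x.^0) S(x)    S(x.^2);        % the denominator determinant Delta
       S(x)    S(x.^2) S(x.^3);
       S(x.^2) S(x.^3) S(x.^4)];
  col = [S(y); S(x.*y); S(x.^2.*y)];   % the column of y-sums
  Da = D; Da(:,1) = col;               % Cramer's rule: replace one column
  Db = D; Db(:,2) = col;               % at a time with the y-sum column
  Dc = D; Dc(:,3) = col;
  abc = [det(Da) det(Db) det(Dc)]/det(D)  % compare: fliplr(polyfit(x,y,2))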
MatLAB Example:
2nd-Degree Polynomial Fit
 First, create a set of points that follow a second-degree polynomial, with some random errors, and plot them:
 x = -3:0.1:3;
 y = randn(1,61)*2 - 2 + 3*x + 1.5*x.^2;
 plot(x,y,'.')
 Now use polyfit to fit a second-degree polynomial:
 p = polyfit(x,y,2)
prints p = 1.5174 3.0145 -2.5130 (coefficients in descending powers of x)
 Now overplot the fit:
 hold on
 plot(x,polyval(p,x),'r')
 And the original function:
 plot(x,-2 + 3*x + 1.5*x.^2,'g')
 Notice that the points scatter about the fit. Look at the residuals.
[Figure: the data points with the polyfit curve (red) and the original function y = 1.5x^2 + 3x - 2 (green); y(x) vs. x.]



MatLAB Example (cont’d):
2nd-Degree Polynomial Fit
 The residuals are the differences between the points and the fit:
 resid = y - polyval(p,x);
 figure
 plot(x,resid,'.')
 The residuals appear flat and random, which is good. Check the standard deviation of the residuals:
 std(resid)
prints ans = 1.9475
 This is close to the value of 2 we used when creating the points.
[Figure: the residuals vs. x, scattered flat about zero.]
MatLAB Example (cont’d):
Chi-Square for Fit
 We could take our set of points, generated from a 2nd-order polynomial, and fit a 3rd-order polynomial:
 p2 = polyfit(x,y,3)
 hold off
 plot(x,polyval(p2,x),'.')
 The fit looks the same, but there is a subtle difference due to the use of an additional parameter. Let's look at the standard deviation of the new residuals:
 resid2 = y - polyval(p2,x);
 std(resid2)
prints ans = 1.9312
 Is this a better fit? The residuals are slightly smaller, BUT check chi-square.
 chisq1 = sum((resid/std(resid)).^2) % prints 60.00
 chisq2 = sum((resid2/std(resid2)).^2) % prints 60.00
 They look identical, but now consider the reduced chi-square:
 sum((resid/std(resid)).^2)/58. % prints 1.0345
 sum((resid2/std(resid2)).^2)/57. % prints 1.0526 => 2nd-order fit is preferred
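To make the degrees-of-freedom bookkeeping explicit, here is a small sketch of ours (not from the slides), using the known generating error sigma = 2 rather than the sample estimate std(resid):

  sigma = 2;              % the known error used when creating the points
  dof1 = length(x) - 3;   % 61 points - 3 quadratic parameters = 58
  dof2 = length(x) - 4;   % 61 points - 4 cubic parameters = 57
  redchi1 = sum((resid/sigma).^2)/dof1    % reduced chi-square, 2nd-order fit
  redchi2 = sum((resid2/sigma).^2)/dof2   % reduced chi-square, 3rd-order fit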
Linear Fits, Polynomial Fits,
Nonlinear Fits
 When we talk about a fit being linear or nonlinear, we mean linear in the coefficients (parameters), not in the independent variable. Thus, a polynomial fit is linear in the coefficients a, b, c, etc., even though those coefficients multiply nonlinear terms in the independent variable x (e.g., cx^2).
 Thus, polynomial fitting is still linear least-squares fitting, even though we are fitting a nonlinear function of the independent variable x. The reason this is considered linear fitting is that for n parameters we can obtain n linear equations in n unknowns, which can be solved exactly (for example, by the method of determinants using Cramer's Rule, as we have done). A concrete illustration follows below.
 In general, this cannot be done for functions that are nonlinear in the parameters (e.g., fitting a Gaussian function f(x) = a exp{-[(x - b)/c]^2}, or a sine function f(x) = a sin(bx + c)). We will discuss nonlinear fitting next time, when we discuss Chapter 8.
 However, there is an important class of functions that are nonlinear in the parameters but can be linearized (cast in a form that becomes linear in the coefficients). We will now take a look at that.
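To see this linearity concretely, here is a short sketch of ours (not from the slides): the quadratic model written as a linear system in the three coefficients and solved with MATLAB's backslash operator, which solves the same normal equations in the least-squares sense, assuming equal uncertainties:

  x = (-3:0.1:3)';                 % column vector of sample points
  y = -2 + 3*x + 1.5*x.^2 + 2*randn(size(x));
  A = [ones(size(x)) x x.^2];      % each column multiplies one coefficient
  coef = A\y                       % least-squares solution of A*coef = y
  % coef = [a; b; c]; compare with fliplr(polyfit(x,y,2))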
Linearizing Non-Linear Fits
 Consider the equation
\[ y(x) = a e^{-bx}, \]
where a and b are the unknown parameters. Rather than consider a and b, we can take the natural logarithm of both sides and consider instead the function
\[ \ln y = \ln a - bx. \]
 This is linear in the parameters \ln a and b, where chi-square is
\[ \chi^2 = \sum \left[ \frac{1}{\sigma_i'} \left( \ln y_i - \ln a + b x_i \right) \right]^2 . \]
 Notice, though, that we must use uncertainties \sigma_i', instead of the usual \sigma_i, to account for the transformation of the dependent variable:
\[ \sigma_i'^2 = \left( \frac{\partial (\ln y_i)}{\partial y_i} \right)^2 \sigma_i^2 = \frac{1}{y_i^2}\,\sigma_i^2 \quad \Rightarrow \quad \sigma_i' = \frac{\sigma_i}{y_i}. \]
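As a quick numerical check of this transformation (our sketch, with illustrative numbers), a Monte Carlo simulation reproduces sigma_i' = sigma_i/y_i:

  yi = 0.1;  sigi = 0.01;            % a sample measurement and its uncertainty
  samples = yi + sigi*randn(1,1e5);  % simulated measurements of yi
  std(log(samples))                  % approximately sigi/yi = 0.1, as predicted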


MatLAB Example:
Linearizing An Exponential
 First, create a set of points that follow the exponential, with some random errors, and plot them:
 x = 1:10;
 y = 0.5*exp(-0.75*x);
 sig = 0.03*sqrt(y); % errors proportional to sqrt(y)
 dev = sig.*randn(1,10);
 errorbar(x,y+dev,sig)
 Now convert using log(yi) (MatLAB's log() is the natural logarithm, ln):
 logy = log(y+dev);
 plot(x,logy,'.')
 As predicted, the points now make a pretty good straight line. What about the errors? You might think this will work:
 errorbar(x, logy, log(sig))
 Try it! What is wrong?
[Figures: the exponential data with error bars (y vs. x), and ln(y) vs. x after the transformation.]


MatLAB Example (cont’d):
Linearizing An Exponential
 The correct errors are as noted earlier:
 logsig = sig./y;
 errorbar(x, logy, logsig)
 This now gives the correct plot. Let's go ahead and try a linear fit. Remember, to do a weighted linear fit we use glmfit():
 p = glmfit(x,logy,'normal','weights',logsig);
 p = circshift(p,1); % swap order of parameters for polyval
 hold on
 plot(x,polyval(p,x),'r')
 To plot the line over the original data:
 hold off
 errorbar(x,y+dev,sig)
 hold on
 plot(x,exp(polyval(p,x)),'r')
 Note the fitted parameters: a' = ln a = -0.6931 and b' = -b = -0.75, i.e., a = 0.5 and b = 0.75.
[Figures: the straight-line fit to ln(y) vs. x, and the fitted exponential overplotted on the original data with error bars.]



Summary
 Use polyfit() for polynomial fitting, with the third parameter giving the degree of the polynomial. Remember that higher-degree polynomials use up more degrees of freedom (an nth-degree polynomial takes away n + 1 DOF).
 A polynomial fit is still considered linear least-squares fitting, despite its dependence on powers of the independent variable, because it is linear in the coefficients (parameters).
 For some problems, such as exponentials, y(x) = a e^{-bx}, one can linearize the problem. Another type that can be linearized is a power-law expression,
\[ y(x) = a x^b, \]
as you will do in the homework (a sketch follows below).
 When linearizing, the errors must be handled properly, using the usual error propagation equation, e.g.
\[ \sigma_i'^2 = \left( \frac{\partial (\ln y_i)}{\partial y_i} \right)^2 \sigma_i^2 = \frac{1}{y_i^2}\,\sigma_i^2 \quad \Rightarrow \quad \sigma_i' = \frac{\sigma_i}{y_i}. \]
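As a hint toward that homework, here is a minimal unweighted MATLAB sketch of ours (not from the slides; the values a = 2 and b = 1.5 are illustrative, and a full solution should weight the fit with the transformed errors as described above):

  % Power law y = a*x.^b  =>  ln y = ln a + b*ln x, linear in ln a and b
  x = 1:10;
  y = 2*x.^1.5 .* (1 + 0.02*randn(1,10));  % a = 2, b = 1.5, small errors
  p = polyfit(log(x), log(y), 1);          % straight-line fit in log-log space
  b = p(1), a = exp(p(2))                  % recover b ~ 1.5 and a ~ 2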