Physics 114, Lecture 17: Least Squares Fit to Polynomial (Dale E. Gary)
For a second-order polynomial fit $y(x) = a + b x + c x^2$, the method of determinants (Cramer's Rule) gives the coefficients:

$$
a = \frac{1}{\Delta}
\begin{vmatrix}
\sum \frac{y_i}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \\
\sum \frac{x_i y_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} \\
\sum \frac{x_i^2 y_i}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} & \sum \frac{x_i^4}{\sigma_i^2}
\end{vmatrix},
\qquad
b = \frac{1}{\Delta}
\begin{vmatrix}
\sum \frac{1}{\sigma_i^2} & \sum \frac{y_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \\
\sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i y_i}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} \\
\sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^2 y_i}{\sigma_i^2} & \sum \frac{x_i^4}{\sigma_i^2}
\end{vmatrix},
$$

$$
c = \frac{1}{\Delta}
\begin{vmatrix}
\sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2} & \sum \frac{y_i}{\sigma_i^2} \\
\sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i y_i}{\sigma_i^2} \\
\sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} & \sum \frac{x_i^2 y_i}{\sigma_i^2}
\end{vmatrix},
\qquad \text{where} \qquad
\Delta =
\begin{vmatrix}
\sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \\
\sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} \\
\sum \frac{x_i^2}{\sigma_i^2} & \sum \frac{x_i^3}{\sigma_i^2} & \sum \frac{x_i^4}{\sigma_i^2}
\end{vmatrix}.
$$

All sums run over the data points $i$.
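These determinant ratios can be checked numerically. Below is a minimal NumPy sketch (Python rather than the lecture's MATLAB; the data values, noise level, and helper names are illustrative) that builds the weighted sums, solves for a, b, c by Cramer's Rule, and compares the result against `numpy.polyfit`:

```python
import numpy as np

# Synthetic data from a known quadratic y = a + b*x + c*x^2
# (coefficients match the lecture's example; noise level is illustrative)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 61)
sigma = np.full_like(x, 0.5)              # per-point uncertainties
y = -2 + 3*x + 1.5*x**2 + sigma*rng.standard_normal(x.size)

# Weighted sums appearing in the determinants
w = 1.0/sigma**2
S = lambda q: np.sum(q*w)
A = np.array([[S(np.ones_like(x)), S(x),    S(x**2)],
              [S(x),               S(x**2), S(x**3)],
              [S(x**2),            S(x**3), S(x**4)]])
v = np.array([S(y), S(x*y), S(x**2*y)])

# Cramer's Rule: k-th coefficient = det(A with column k replaced by v) / det(A)
Delta = np.linalg.det(A)
def cramer(k):
    Ak = A.copy()
    Ak[:, k] = v
    return np.linalg.det(Ak)/Delta

a, b, c = cramer(0), cramer(1), cramer(2)
print(a, b, c)   # should land near -2, 3, 1.5
```

For comparison, `np.polyfit(x, y, 2)` solves the same (here uniformly weighted) normal equations and returns the coefficients highest power first.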
hold on
plot(x,polyval(p,x),'r')   % overlay the fit in red
std(resid)
% prints ans = 1.9475

And the original function was y = 1.5x^2 + 3x - 2.

[Figure: the data points with the 2nd-order fit overplotted in red, x from -3 to 3, y from -10 to 10.]
Apr 12, 2010
MATLAB Example (cont'd): Chi-Square for Fit
We could take our set of points, generated from a 2nd order polynomial, and
fit a 3rd order polynomial:
p2 = polyfit(x,y,3)
hold off
plot(x,polyval(p2,x),'.')
The fit looks the same, but there is a subtle difference due to the use of an
additional parameter. Let's look at the standard deviation of the new residuals:
resid2 = y - polyval(p2,x)
std(resid2)
prints ans = 1.9312
Is this a better fit? The residuals are slightly smaller BUT check chi-square.
chisq1 = sum((resid/std(resid)).^2) % prints 60.00
chisq2 = sum((resid2/std(resid2)).^2) % prints 60.00
They look identical, but now consider the reduced chi-square.
sum((resid/std(resid)).^2)/58. % prints 1.0345
sum((resid2/std(resid2)).^2)/57. % prints 1.0526 => 2nd-order fit is preferred
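The comparison above can be sketched in NumPy (Python rather than the lecture's MATLAB; the noise level is illustrative, and 61 points are assumed so that the degrees of freedom match the slide's divisors of 58 and 57):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 61)               # 61 points => dof 58 (quadratic), 57 (cubic)
sigma = 2.0                              # assumed per-point uncertainty (illustrative)
y = -2 + 3*x + 1.5*x**2 + sigma*rng.standard_normal(x.size)

def reduced_chisq(order):
    """Chi-square per degree of freedom for a polynomial fit of the given order."""
    p = np.polyfit(x, y, order)
    resid = y - np.polyval(p, x)
    chisq = np.sum((resid/sigma)**2)
    dof = x.size - (order + 1)           # nu = N - (number of fitted parameters)
    return chisq/dof

r2, r3 = reduced_chisq(2), reduced_chisq(3)
# The cubic term can only lower the raw chi-square, but it costs a degree of
# freedom, so the reduced chi-square generally favors the simpler 2nd-order model.
```

The design point: raw chi-square always decreases (or stays equal) when a parameter is added, so only the reduced chi-square can penalize the unnecessary cubic term.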
Linear Fits, Polynomial Fits, Nonlinear Fits
When we talk about a fit being linear or nonlinear, we mean linear in the
coefficients (parameters), not in the independent variable. Thus, a
polynomial fit is linear in the coefficients a, b, c, etc., even though those
coefficients multiply nonlinear terms in the independent variable x (e.g., cx^2).
Thus, polynomial fitting is still linear least-squares fitting, even though we are
fitting a non-linear function of independent variable x. The reason this is
considered linear fitting is because for n parameters we can obtain n linear
equations in n unknowns, which can be solved exactly (for example, by the
method of determinants using Cramer’s Rule as we have done).
In general, this cannot be done for functions that are nonlinear in the
parameters (e.g., fitting a Gaussian function f(x) = a exp{-[(x - b)/c]^2}, or a sine
function f(x) = a sin[bx + c]). We will discuss nonlinear fitting next time, when
we discuss Chapter 8.
However, there is an important class of functions that are nonlinear in
parameters, but can be linearized (cast in a form that becomes linear in
coefficients). We will now take a look at that.
Linearizing Non-Linear Fits
Consider the equation
$$ y(x) = a e^{bx}, $$
where a and b are the unknown parameters. Rather than consider a and b,
we can take the natural logarithm of both sides and consider instead the
function
$$ \ln y = \ln a + b x. $$
This is linear in the parameters ln a and b, where chi-square is
$$ \chi^2 = \sum_i \frac{1}{\sigma_i'^{\,2}} \left( \ln y_i - \ln a - b x_i \right)^2 . $$
Notice, though, that we must use uncertainties $\sigma_i'$ instead of the usual $\sigma_i$
to account for the transformation of the dependent variable:
$$ \sigma_i'^{\,2} = \left( \frac{\partial (\ln y_i)}{\partial y_i} \right)^2 \sigma_i^2 = \frac{1}{y_i^2}\,\sigma_i^2 \quad\Longrightarrow\quad \sigma_i' = \frac{\sigma_i}{y_i}. $$
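This propagation rule can be verified numerically. A quick Monte Carlo sketch in NumPy (the sample value and uncertainty are illustrative): scatter a measured y about its true value and compare the spread of ln y with sigma/y.

```python
import numpy as np

rng = np.random.default_rng(2)
y0 = 0.5*np.exp(-0.75*3)      # a single "measured" value (illustrative)
sigma = 0.05*y0               # its uncertainty (small, so linear propagation applies)

# Monte Carlo: scatter y about y0 and measure the spread of ln(y)
samples = y0 + sigma*rng.standard_normal(200_000)
sigma_logy = np.std(np.log(samples))

print(sigma_logy, sigma/y0)   # the two should agree closely
```

The agreement holds only while sigma is small compared with y; for large relative errors the log transform distorts the error distribution and the linearized fit becomes biased.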
x = 1:10;
y = 0.5*exp(-0.75*x);
sig = 0.03*sqrt(y); % errors proportional to sqrt(y)
dev = sig.*randn(1,10);
errorbar(x,y+dev,sig)

[Figure: the exponential data with error bars, y vs. x.]

Now take the logarithm of the data,

logy = log(y+dev);
plot(x,logy,'.')

[Figure: ln(y) vs. x; the points fall along a straight line.]

and try a linear fit. Remember, to do a weighted fit we use glmfit(), with the transformed uncertainties sigma' = sigma/y derived above:

logsig = sig./(y+dev); % sigma' = sigma/y
p = glmfit(x,logy,'normal','weights',logsig);
p = circshift(p,1); % swap order of parameters for polyval
hold on
plot(x,polyval(p,x),'r')
hold off
errorbar(x,y+dev,sig)
hold on
plot(x,exp(polyval(p,x)),'r')

[Figure: top, the straight-line fit to ln(y); bottom, the same fit transformed back, exp(polyval(p,x)), overplotted on the original data.]
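For readers without the Statistics Toolbox's glmfit, the same linearized fit can be sketched in Python/NumPy (names are illustrative; the noise coefficient is reduced from the slide's 0.03 to 0.003 so the noisy data stay positive for the logarithm):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(1, 11)                  # x = 1:10
y = 0.5*np.exp(-0.75*x)               # true curve: a = 0.5, b = -0.75
sig = 0.003*np.sqrt(y)                # errors proportional to sqrt(y)
ymeas = y + sig*rng.standard_normal(x.size)

# Linearize: fit ln(y) = ln(a) + b*x using transformed uncertainties sigma' = sigma/y
logy = np.log(ymeas)
logsig = sig/ymeas
# np.polyfit's w multiplies the residuals, so pass w = 1/sigma' for weighted least squares
b_fit, lna_fit = np.polyfit(x, logy, 1, w=1/logsig)
a_fit = np.exp(lna_fit)
print(a_fit, b_fit)                   # should land near 0.5 and -0.75
```

Note the design choice: weighting by 1/sigma' is essential here, because the log transform makes the low-amplitude points (large x) far noisier in ln(y) than the others.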