In the last few lectures we discussed the mean value theorem (which basically relates a function
and its derivative) and its applications. We will now discuss a result called Taylor’s Theorem which
relates a function, its derivative and its higher derivatives. We will see that Taylor’s Theorem is
an extension of the mean value theorem. Though Taylor’s Theorem has applications in numerical
methods, inequalities and local maxima and minima, it basically deals with approximation of
functions by polynomials. To understand this type of approximation let us start with the linear
approximation or tangent line approximation.
Linear Approximation: Let f be a function differentiable at x_0 \in \mathbb{R}. Then the linear polynomial
$$P_1(x) = f(x_0) + f'(x_0)(x - x_0)$$
is the natural linear approximation to f(x) near x_0. Geometrically this is clear, because we approximate the curve near (x_0, f(x_0)) by the tangent line at (x_0, f(x_0)). The following result provides an estimate of the size of the error E_1(x) = f(x) - P_1(x).
Theorem 10.1 (Extended Mean Value Theorem): If f and f' are continuous on [a, b] and f' is differentiable on (a, b), then there exists c \in (a, b) such that
$$f(b) = f(a) + f'(a)(b - a) + \frac{f''(c)}{2}(b - a)^2.$$
Proof (*): This result is a particular case of Taylor’s Theorem whose proof is given below.
If we take b = x and a = x_0 in the previous result, we obtain
$$|E_1(x)| = |f(x) - P_1(x)| \le \frac{M}{2}(x - x_0)^2,$$
where M = \sup\{|f''(t)| : t \in [x_0, x]\}. This estimate tells us how good the approximation is, i.e., how fast the error E_1(x) goes to 0 as x \to x_0.
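As a quick numerical illustration (this sketch is ours, not part of the notes), we can check the bound for f(x) = e^x at x_0 = 0, where f''(t) = e^t, so M = e^x on [0, x]:

```python
import math

def linear_approx_error(f, fprime, x0, x):
    """E1(x) = f(x) - P1(x), the error of the tangent-line approximation."""
    p1 = f(x0) + fprime(x0) * (x - x0)
    return f(x) - p1

# f(x) = e^x at x0 = 0: P1(x) = 1 + x, and M = sup{e^t : t in [0, x]} = e^x.
x = 0.1
e1 = linear_approx_error(math.exp, math.exp, 0.0, x)
bound = (math.exp(x) / 2) * (x - 0.0) ** 2   # M/2 * (x - x0)^2
print(e1 <= bound)                           # the error respects the bound
```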
Naturally, one asks: can we get a better estimate for the error if we use approximation by higher-order polynomials? The answer is yes, and this is what Taylor's theorem is about.
There might be several ways to approximate a given function by a polynomial of degree \ge 2; Taylor's theorem, however, deals with the polynomial that agrees with f and some of its derivatives at a given point x_0, just as P_1(x) does in the case of the linear approximation.
The polynomial
$$P_n(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$
has the property that P_n(x_0) = f(x_0) and P_n^{(k)}(x_0) = f^{(k)}(x_0) for all k = 1, 2, \ldots, n, where f^{(k)}(x_0) denotes the k-th derivative of f at x_0. This polynomial is called the Taylor polynomial of degree n (with respect to f and x_0).
The following theorem, called Taylor's Theorem, provides an estimate for the error function E_n(x) = f(x) - P_n(x).
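For concreteness (an illustrative sketch of ours, not from the notes), here is the Taylor polynomial of sin at x_0 = 0, using the fact that the derivatives of sin cycle through sin, cos, -sin, -cos, so f^{(k)}(0) cycles through 0, 1, 0, -1; the error E_n(x) visibly shrinks as n grows:

```python
import math

def taylor_sin(x, n):
    """Taylor polynomial P_n of sin at x0 = 0; f^(k)(0) cycles 0, 1, 0, -1."""
    derivs_at_0 = [0.0, 1.0, 0.0, -1.0]
    return sum(derivs_at_0[k % 4] / math.factorial(k) * x ** k
               for k in range(n + 1))

x = 1.0
errors = [abs(math.sin(x) - taylor_sin(x, n)) for n in (1, 3, 5, 7)]
print(errors)   # each error is smaller than the one before
```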
Theorem 10.2: Let f : [a, b] \to \mathbb{R} be such that f, f', f'', \ldots, f^{(n-1)} are continuous on [a, b], and suppose f^{(n)} exists on (a, b). Then there exists c \in (a, b) such that
$$f(b) = f(a) + f'(a)(b - a) + \frac{f''(a)}{2!}(b - a)^2 + \cdots + \frac{f^{(n-1)}(a)}{(n-1)!}(b - a)^{n-1} + \frac{f^{(n)}(c)}{n!}(b - a)^n.$$
Proof (*): Define
$$F(x) = f(b) - f(x) - f'(x)(b - x) - \frac{f''(x)}{2!}(b - x)^2 - \cdots - \frac{f^{(n-1)}(x)}{(n-1)!}(b - x)^{n-1}.$$
We will show that F(a) = \frac{(b-a)^n}{n!} f^{(n)}(c) for some c \in (a, b), which will prove the theorem. Note that
$$F'(x) = -\frac{f^{(n)}(x)}{(n-1)!}(b - x)^{n-1}. \qquad (1)$$
Define g(x) = F(x) - \left(\frac{b-x}{b-a}\right)^n F(a). It is easy to check that g(a) = g(b) = 0, and hence by Rolle's theorem there exists some c \in (a, b) such that
$$g'(c) = F'(c) + \frac{n(b-c)^{n-1}}{(b-a)^n} F(a) = 0. \qquad (2)$$
From (1) and (2) we obtain \frac{f^{(n)}(c)}{(n-1)!}(b-c)^{n-1} = \frac{n(b-c)^{n-1}}{(b-a)^n} F(a). This implies that F(a) = \frac{(b-a)^n}{n!} f^{(n)}(c), which proves the theorem. ∎
Let us see some applications.
Problem 1: Show that 1 - \frac{1}{2}x^2 \le \cos x for all x \in \mathbb{R}.
Solution: Take f(x) = \cos x and x_0 = 0 in Taylor's Theorem. Then there exists c between 0 and x such that
$$\cos x = 1 - \frac{1}{2}x^2 + \frac{\sin c}{6}x^3.$$
Verify that the term \frac{\sin c}{6}x^3 \ge 0 when |x| \le \pi. If |x| \ge \pi, then 1 - \frac{1}{2}x^2 < -3 \le \cos x. Therefore the inequality holds for all x \in \mathbb{R}.
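A brute-force numerical check of this inequality (an illustrative sketch of ours, not in the notes):

```python
import math

# Sample many points; 1 - x^2/2 should never exceed cos x.
points = [i / 100 for i in range(-1000, 1001)]   # x in [-10, 10]
ok = all(1 - 0.5 * x ** 2 <= math.cos(x) + 1e-12 for x in points)
print(ok)
```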
Problem 2: Let x_0 \in (a, b) and n \ge 2. Suppose f', f'', \ldots, f^{(n)} are continuous on (a, b) and f'(x_0) = \cdots = f^{(n-1)}(x_0) = 0. If n is even and f^{(n)}(x_0) > 0, then f has a local minimum at x_0. Similarly, if n is even and f^{(n)}(x_0) < 0, then f has a local maximum at x_0.
Solution: By Taylor's theorem, for x \in (a, b) there exists c between x and x_0 such that
$$f(x) = f(x_0) + \frac{f^{(n)}(c)}{n!}(x - x_0)^n. \qquad (3)$$
Let f^{(n)}(x_0) > 0 and n be even. Then by the continuity of f^{(n)} there exists a neighborhood U of x_0 such that f^{(n)}(x) > 0 for all x \in U. This implies that \frac{f^{(n)}(c)}{n!}(x - x_0)^n \ge 0 whenever c \in U. Hence by equation (3), f(x) \ge f(x_0) for all x \in U, which implies that x_0 is a local minimum.
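As a sanity check (our example, not from the notes), f(x) = x^4 at x_0 = 0 satisfies the hypotheses with n = 4: f'(0) = f''(0) = f'''(0) = 0 and f^{(4)}(0) = 24 > 0, so 0 should be a local minimum:

```python
def f(x):
    return x ** 4   # f'(0) = f''(0) = f'''(0) = 0, f''''(0) = 24 > 0, n = 4 even

x0 = 0.0
# f(x) >= f(x0) on a small neighborhood of x0, as the criterion predicts:
neighborhood = [x0 + i / 1000 for i in range(-100, 101)]
print(all(f(x) >= f(x0) for x in neighborhood))
```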
Problem 3: Using Taylor's theorem, show that for any k \in \mathbb{N} and for all x > 0,
$$x - \frac{x^2}{2} + \cdots - \frac{x^{2k}}{2k} < \log(1 + x) < x - \frac{x^2}{2} + \cdots + \frac{x^{2k+1}}{2k+1}.$$
Solution: By Taylor's theorem, there exists c \in (0, x) such that
$$\log(1 + x) = x - \frac{1}{2}x^2 + \cdots + \frac{(-1)^{n-1}}{n}x^n + \frac{(-1)^n}{n+1} \cdot \frac{x^{n+1}}{(1+c)^{n+1}}.$$
Note that, for any x > 0, the remainder term \frac{(-1)^n}{n+1} \cdot \frac{x^{n+1}}{(1+c)^{n+1}} is positive if n = 2k and negative if n = 2k + 1, which gives the two inequalities.
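These bounds are easy to test numerically; a minimal sketch (ours, not part of the notes):

```python
import math

def partial_log(x, m):
    """Partial sum x - x^2/2 + x^3/3 - ... up to the x^m term."""
    return sum((-1) ** (j - 1) * x ** j / j for j in range(1, m + 1))

for x in (0.1, 0.5, 1.0, 3.0):
    for k in (1, 2, 3):
        lower = partial_log(x, 2 * k)       # ends with the -x^(2k)/(2k) term
        upper = partial_log(x, 2 * k + 1)   # ends with the +x^(2k+1)/(2k+1) term
        assert lower < math.log(1 + x) < upper
print("log(1 + x) lies strictly between the even and odd partial sums")
```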
MATH 142 - The Taylor Remainder (Joe Foster)
Taylor's Formula: If f(x) has derivatives of all orders in an open interval I containing a, then for each positive integer n and for each x \in I,
$$f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n + R_n(x),$$
where
$$R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x - a)^{n+1}$$
for some c between a and x.
Definitions: The equation above for f(x) is called Taylor's formula. The function R_n(x) is called the remainder of order n, or the error term, for the approximation of f(x) by P_n(x) over I.
If R_n(x) \to 0 as n \to \infty for all x \in I, we say that the Taylor series generated by f(x) at x = a converges to f(x) on I, and we write
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n.$$
Often we can estimate Rn (x) without knowing the value of c.
The Remainder Estimation Theorem: If there is a positive constant M such that |f^{(n+1)}(t)| \le M for all t between x and a, inclusive, then the remainder term R_n(x) in Taylor's Theorem satisfies the inequality
$$|R_n(x)| \le M \frac{|x - a|^{n+1}}{(n+1)!}.$$
If this inequality holds for every n and the other conditions of Taylor’s Theorem are satisfied by f (x), then the series
converges to f (x).
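The bound is mechanical to compute; here is a small Python helper (our sketch, not from the handout), applied to f(x) = sin x, whose derivatives are all bounded by M = 1:

```python
import math

def remainder_bound(M, x, a, n):
    """M * |x - a|^(n+1) / (n+1)!, per the Remainder Estimation Theorem."""
    return M * abs(x - a) ** (n + 1) / math.factorial(n + 1)

# Every derivative of sin is bounded by M = 1, so the degree-5 Taylor
# polynomial at a = 0 is within |x|^6 / 720 of sin x:
x = 0.5
actual = abs(math.sin(x) - (x - x ** 3 / 6 + x ** 5 / 120))
print(actual <= remainder_bound(1, x, 0, 5))   # the true error obeys the bound
```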
Example 1: Show that the Taylor series generated by f(x) = e^x at x = 0 converges to f(x) for every value of x.
f(x) has derivatives of all orders on (-\infty, \infty). Using the Taylor polynomial generated by f(x) = e^x at a = 0 and Taylor's formula, we have
$$e^x = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} + R_n(x),$$
where R_n(x) = \frac{e^c}{(n+1)!} x^{n+1} for some c between 0 and x. Recall that e^x is an increasing function, so if 0 < |c| < |x| we know e^c \le e^{|c|} < e^{|x|}. Thus,
$$\lim_{n \to \infty} |R_n(x)| = \lim_{n \to \infty} \frac{e^c |x|^{n+1}}{(n+1)!} \le \lim_{n \to \infty} \frac{e^{|x|} |x|^{n+1}}{(n+1)!} = e^{|x|} \lim_{n \to \infty} \frac{|x|^{n+1}}{(n+1)!} = 0.$$
Hence, since \lim_{n \to \infty} R_n(x) = 0 for all x, the Taylor series converges to e^x on (-\infty, \infty).
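Watching R_n(x) vanish numerically (a sketch of ours, not part of the handout):

```python
import math

def exp_partial(x, n):
    """P_n(x) = 1 + x + ... + x^n/n! for f(x) = e^x at a = 0."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 3.0
errors = [abs(math.exp(x) - exp_partial(x, n)) for n in (5, 10, 15, 20)]
print(errors)   # |R_n(x)| shrinks rapidly toward 0 as n grows
```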
Example 2: Estimate the error if P_2(x) = 1 - \frac{x^2}{2} is used to estimate the value of \cos(x) at x = 0.6.
We are estimating f(x) = \cos(x) with its 2nd-degree Taylor polynomial (centred at zero), so we can bound the error using the Remainder Estimation Theorem with n = 2. Since |f^{(3)}(c)| = |\sin(c)| \le 1,
$$\text{Error} = |R_2(0.6)| = \left|\frac{f^{(3)}(c)}{3!}(0.6)^3\right| = \frac{|\sin(c)|}{3!}(0.6)^3 \le \frac{1}{3!}(0.6)^3 = 0.036.$$
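Comparing the bound with the true error (our sketch, not part of the handout):

```python
import math

x = 0.6
p2 = 1 - x ** 2 / 2
actual_error = abs(math.cos(x) - p2)
bound = x ** 3 / math.factorial(3)   # |sin c| <= 1, so M = 1
print(actual_error, bound)           # the true error sits well under 0.036
```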
Example 3: For approximately what values of x can you replace \sin(x) by x - \frac{x^3}{6} with an error of magnitude no greater than 4 \times 10^{-3}?
We wish to estimate f(x) = \sin(x) with its 3rd-degree Taylor polynomial (centred at zero), so first we bound the error using the Remainder Estimation Theorem. Since |f^{(4)}(c)| = |\sin(c)| \le 1,
$$\text{Error} = |R_3(x)| = \left|\frac{f^{(4)}(c)}{4!} x^4\right| = \frac{|\sin(c)|}{4!} x^4 \le \frac{1}{4!} x^4.$$
We want the error to be no greater than 4 \times 10^{-3}, so we solve the inequality
$$\frac{1}{4!} x^4 \le 0.004 \implies |x| \le \sqrt[4]{4! \cdot 0.004} \approx 0.556.$$
Thus the values of x in the interval [-0.556, 0.556] can be approximated to the desired accuracy.
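Verifying both the cutoff and the accuracy claim numerically (our sketch, not from the handout):

```python
import math

limit = (math.factorial(4) * 0.004) ** 0.25   # |x| <= (4! * 0.004)^(1/4)
print(limit)                                  # about 0.556

# The worst actual error of x - x^3/6 over [-limit, limit]:
xs = [i * limit / 100 for i in range(-100, 101)]
worst = max(abs(math.sin(x) - (x - x ** 3 / 6)) for x in xs)
print(worst <= 0.004)
```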
Note that the approximations in the previous two examples can be improved by using the Alternating Series Estimation
Theorem instead.
Example 4: Use the Remainder Estimation Theorem to estimate the maximum error when approximating f(x) = e^x by P_2(x) = 1 + x + \frac{x^2}{2} on the interval \left[-\frac{5}{6}, \frac{5}{6}\right].
We wish to estimate f(x) = e^x with its 2nd-degree Taylor polynomial (centred at zero), so first let's bound the error for a general x:
$$\text{Error} = |R_2(x)| = \left|\frac{f^{(3)}(c)}{3!} x^3\right| \le \frac{e^c}{3!} |x|^3,$$
where c lies between a = 0 and x. Now, since we are looking only at the interval \left[-\frac{5}{6}, \frac{5}{6}\right], we have |c| < \frac{5}{6} for each x in this interval. So e^c \le e^{5/6}, since e^x is an increasing function.
Now we apply some guesswork. We are approximating values of e^x, so it doesn't seem right to use one of those values in our bound (if we could get the value of e^{5/6}, then why would we merely approximate?), so we should bound e^{5/6}. There are many ways to do this, and you may use any justification you see fit. We shall use
$$e^{5/6} < e^1 < 3.$$
Thus, for |x| \le \frac{5}{6}, the error can be bounded by
$$\text{Error} \le \frac{e^c}{3!} |x|^3 \le \frac{e^{5/6}}{3!} |x|^3 \le \frac{3}{3!} \left(\frac{5}{6}\right)^3 \approx 0.289.$$