MATH 2072U
February 15th, 2019 Midterm solutions L. van Veen
1. Consider the equation

x sin(x) - 1 = 0.

This equation has a unique solution x* in [2, 3].
(a) [10 marks] Find x* with an error less than 0.05 using bisection. For every iteration, list the left
boundary, the right boundary and an upper bound for the error.
Solution: see table below:
It 0, l=2.000000, r=3.000000, err=1.000000e+00
It 1, l=2.500000, r=3.000000, err=5.000000e-01
It 2, l=2.750000, r=3.000000, err=2.500000e-01
It 3, l=2.750000, r=2.875000, err=1.250000e-01
It 4, l=2.750000, r=2.812500, err=6.250000e-02
It 5, l=2.750000, r=2.781250, err=3.125000e-02
Any point in the interval [2.750000, 2.781250] is an approximation to x* with an error less than or equal to 0.05.
(b) [10 marks] Starting from the initial guess x^(0) = 3, use Newton's method to find x* with a residual less than 10^-5. For every iteration, list the approximation x_n, the step size x_{n+1} - x_n, and the residual.
Solution: see table below:
Iteration 1: approximation is 3.000000e+00, step size -2.038420e-01, res=5.766400e-01
Iteration 2: approximation is 2.796158e+00, step size -2.320951e-02, res=5.320499e-02
Iteration 3: approximation is 2.772948e+00, step size -3.436999e-04, res=7.651798e-04
Iteration 4: approximation is 2.772605e+00, step size -7.604445e-08, res=1.692231e-07
so an approximate solution with a residual less than 10^-5 is x = 2.772605.
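The Newton iteration of part (b) can be reproduced with a short Python script (a sketch; names are my own):

```python
import math

def f(x):
    """Residual of x*sin(x) - 1 = 0."""
    return x * math.sin(x) - 1.0

def fprime(x):
    """Derivative of f, needed for the Newton step."""
    return math.sin(x) + x * math.cos(x)

x = 3.0  # initial guess x^(0)
for it in range(1, 5):
    res = f(x)
    step = -res / fprime(x)  # Newton step x_{n+1} - x_n
    print(f"Iteration {it}: approximation is {x:.6e}, "
          f"step size {step:.6e}, res={abs(res):.6e}")
    x += step
```

Note the quadratic convergence visible in the residual column: the exponent roughly doubles from one iteration to the next.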
(c) [5 marks] For each of the following statements, indicate whether it is true or false:
1. Newton iteration converges to x* only when starting from one, unique, initial point in [2,3].
False: in fact it converges for x^(0) in some open neighbourhood of x*.
2. The fact that the equation has a unique solution in [2,3] is a necessary condition for bisection to converge.
False. Bisection can converge even if there are multiple solutions.
3. Newton iteration takes fewer FLOPs than bisection to compute x* up to 13 significant digits when starting the former from x^(0) = 3 and the latter from [2,3].
True. One would need one more Newton iteration starting from the result of part (b), but the same accuracy can only be achieved with bisection after about 40 more iterations starting from the result of part (a). Even though one Newton iteration costs about twice as many FLOPs as one bisection step, it still costs fewer FLOPs to reach the desired accuracy.
4. If you start Newton iteration from some initial point in [2,3], all iterates (i.e. all consecutive approximate solutions) will be in [2,3].
False. For instance, if you start the Newton iteration above from x^(0) = 2.1, you converge to a solution outside [2,3].
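This counterexample is easy to check numerically (a sketch, reusing the same f and f' as in part (b)):

```python
import math

def f(x):
    return x * math.sin(x) - 1.0

def fprime(x):
    return math.sin(x) + x * math.cos(x)

x = 2.1  # start inside [2, 3]
for _ in range(50):
    x -= f(x) / fprime(x)

# The first step already jumps well past x = 3; the iteration then
# settles on another root of x*sin(x) = 1, near x = 6.44, outside [2, 3].
print(x)
```

The failure happens because f'(2.1) is small, so the first Newton step is large.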
5. If bisection converges, the error is reduced by the same factor in each iteration.
True. That factor is 1/2.
2. [10 marks] Let the matrix A be given by
Compute the factorisation PA = LU using partial pivoting, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. Write down the matrices P, L and U at all intermediate steps.
[Worked elimination steps: the matrices L, U and P after each stage of partial pivoting; the entries are illegible in this copy.]
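A factorisation of this kind can be checked with SciPy (a sketch; the 3x3 matrix below is a stand-in, not the exam matrix). Note that `scipy.linalg.lu` returns factors satisfying A = P L U, so P^T A = L U is the PA = LU form used here:

```python
import numpy as np
from scipy.linalg import lu

# Stand-in 3x3 matrix (not the exam matrix).
A = np.array([[1.0, 3.0, 1.0],
              [2.0, 1.5, 3.0],
              [2.5, 6.0, 1.5]])

# SciPy's convention: A = P @ L @ U, with L unit lower triangular
# and U upper triangular; P.T @ A = L @ U recovers the PA = LU form.
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))  # True
```

Partial pivoting picks the largest remaining entry in each column as pivot, which is why row swaps (recorded in P) appear during the elimination.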
3. Consider the following linear system:

Ax = b, where A = ( 1.0  3.0  1.0 ; 2.0  5.9  2.0 ; 2.5  6.0  2.0 ) and b = ( 6.0000 ; 11.800 ; [illegible] ).

Using a numerical method, you find the approximate solution to be

x = ( 0.99960385 ; -1.99996435 ; 1.000192 ).
(a) [10 marks] What is the maximal relative error of your approximate solution?
With the residual r = b - Ax of the approximate solution, the bound ||x - x_exact||/||x_exact|| <= cond(A) ||r||/||b|| gives a maximal relative error of about 0.2505.
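The bound can be evaluated with NumPy (a sketch; the system below is stand-in data with exact solution (1, -2, 1), not the exam's exact entries):

```python
import numpy as np

# Stand-in system (not the exam data); the exact solution is x = (1, -2, 1).
A = np.array([[1.0, 3.0, 1.0],
              [2.0, 5.9, 2.0],
              [2.5, 6.0, 2.0]])
x_exact = np.array([1.0, -2.0, 1.0])
b = A @ x_exact
x_hat = np.array([0.99960385, -1.99996435, 1.000192])

r = b - A @ x_hat  # residual of the approximate solution
bound = (np.linalg.cond(A, np.inf)
         * np.linalg.norm(r, np.inf) / np.linalg.norm(b, np.inf))
true_err = (np.linalg.norm(x_hat - x_exact, np.inf)
            / np.linalg.norm(x_exact, np.inf))
print(bound, true_err)  # the bound always dominates the true relative error
```

The bound is typically pessimistic: the true relative error can be orders of magnitude smaller than cond(A)·||r||/||b||.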
(b) [5 marks] Assuming that you know the matrix entries exactly, but you know only five digits of the entries of b, how many digits of the entries of x can you expect to be correct?
The condition number is of order 10^3, so we can expect 5 - 3 = 2 digits of the approximate solution to be correct.
4. A special property of lower triangular, square matrices is that the product of two lower triangular matrices is also lower triangular. To be precise, if A_ij = 0 for j > i and B_ij = 0 for j > i, then C = AB satisfies C_ij = 0 for j > i.
(a) [15 marks] Write a pseudo-code for a function that computes the product of two lower triangular matrices. Initialize the product as a zero matrix and compute only the elements on and below the diagonal.

Function triangular_prod
In: lower triangular n x n matrices A, B. Out: lower triangular matrix C = AB.
1. Initialize C as an n x n array of zeros.
2. For i = 1...n
   a. For j = 1...i
      i. For k = 1...n
         A. C[i,j] = C[i,j] + A[i,k] B[k,j]
3. Output C.
Note that, in Python, the array indices start from 0, so when implementing this we must change the indices in line A accordingly. Secondly, the bounds for loop variable k can be changed to k = j...i to make the function even more efficient.
(b) [15 marks] Compute the number of FLOPs it takes to complete the function of part (a) as a function of the number of rows n of the input matrices.

There are 2 FLOPs in line A. Translating all loops into sums, we get

#FLOPs = Σ_{i=1}^{n} Σ_{j=1}^{i} Σ_{k=1}^{n} 2 = 2n Σ_{i=1}^{n} i = n^2 (n+1).

The more efficient version, with inner loop k = j...i, gives

#FLOPs = Σ_{i=1}^{n} Σ_{j=1}^{i} Σ_{k=j}^{i} 2 = Σ_{i=1}^{n} Σ_{j=1}^{i} 2(i - j + 1) = Σ_{i=1}^{n} i(i+1)
       = n(n+1)(2n+1)/6 + n(n+1)/2.
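The closed-form counts n^2(n+1) and n(n+1)(2n+1)/6 + n(n+1)/2 can be double-checked by literally counting the 2 FLOPs per innermost pass (a quick Python check):

```python
# Count the 2 FLOPs in the innermost line for both loop structures and
# compare with the closed-form expressions for several matrix sizes n.
for n in (3, 5, 10):
    full = sum(2 for i in range(1, n + 1)
                 for j in range(1, i + 1)
                 for k in range(1, n + 1))    # inner loop k = 1..n
    eff = sum(2 for i in range(1, n + 1)
                for j in range(1, i + 1)
                for k in range(j, i + 1))     # inner loop k = j..i
    assert full == n * n * (n + 1)
    assert eff == n * (n + 1) * (2 * n + 1) // 6 + n * (n + 1) // 2
print("closed-form counts confirmed")
```

For n = 10 this gives 1100 versus 440 FLOPs, already showing the roughly threefold saving of the restricted inner loop.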
(c) [10 marks] Implement the pseudo-code of part (a) in a function called triangular_prod.py. It should take as input two n x n arrays with zeros above their diagonal and output their matrix product as an n x n array with zeros above its diagonal.
See codes on Blackboard.
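The Blackboard code itself is not reproduced here, but a direct Python translation of the pseudo-code might look like this (a sketch, with 0-based indices and the inner loop restricted to k = j...i as suggested above):

```python
import numpy as np

def triangular_prod(A, B):
    """Product C = A B of two lower triangular n x n arrays.

    Only entries on and below the diagonal are computed; the strict
    upper triangle of C stays zero.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):         # j <= i: lower triangle only
            for k in range(j, i + 1):  # outside j..i, A[i,k]*B[k,j] = 0
                C[i, j] += A[i, k] * B[k, j]
    return C
```

The restricted k-range works because A[i,k] = 0 for k > i and B[k,j] = 0 for k < j, so the skipped terms contribute nothing.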
(d) [10 marks] Write a script test_prod.py that
1. generates two random lower triangular matrices,
2. uses the function of part (c) to find their product, and
3. verifies the result by comparing it to the output of scipy.matmul or scipy.dot.
See codes on Blackboard.
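A possible version of such a test script (a sketch; the part (c) function is repeated here so the script is self-contained, and the library comparison uses `A @ B`, equivalent to the scipy calls mentioned above):

```python
import numpy as np

def triangular_prod(A, B):
    # part (c) function, repeated so the script is self-contained
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            for k in range(j, i + 1):
                C[i, j] += A[i, k] * B[k, j]
    return C

# 1. generate two random lower triangular matrices
rng = np.random.default_rng(1)
n = 6
A = np.tril(rng.random((n, n)))
B = np.tril(rng.random((n, n)))

# 2. compute their product with the function of part (c)
C = triangular_prod(A, B)

# 3. verify against the library matrix product
print(np.allclose(C, A @ B))  # True
```

Comparing against a trusted library routine on random inputs is a cheap but effective sanity check for hand-written linear algebra code.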
(e) [10 marks (bonus)] Your function for triangular matrix-matrix multiplication can be at most 6 times faster than regular matrix-matrix multiplication (asymptotically for large n). Is your code 6 times faster? Then the 10 bonus marks are yours. If not, find a way to make your function more efficient until it is 6 times faster. You can either compare the number of FLOPs in your code to that of regular matrix-matrix multiplication or demonstrate the difference in speed by measuring the wall time of both.
Regular matrix-matrix multiplication takes 2n^3 FLOPs in leading order (i.e. the largest term for large enough n, see Lecture 9). The first version of the code in parts (a-c), with inner loop k = 1...n, takes n^3 FLOPs in leading order and is twice as fast. The second version, with inner loop k = j...i, takes n^3/3 FLOPs in leading order and is thus up to six times faster.