2022-01-19 Conditioning¶
Office hours: Monday 9-10pm, Tuesday 2-3pm, Thursday 2-3pm
More on floating point
Discuss Taylor Series activity
Condition numbers
Forward and backward error
Computing volume of a polygon
using Plots
default(linewidth=4)
Floating point representation is relative¶
Let \(\operatorname{fl}\) round to the nearest floating point number,
\[ \operatorname{fl}(x) = x (1 + \epsilon), \quad \text{where } |\epsilon| \le \epsilon_{\text{machine}} . \]
This also means that the relative error in representing \(x\) is small:
\[ \frac{|\operatorname{fl}(x) - x|}{|x|} \le \epsilon_{\text{machine}} \]
plot(x -> (1 + x) - 1, xlims=(-1e-15, 1e-15))
plot!(x -> x)
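As a small added check (not part of the original cell), we can measure the representation error of the decimal value 0.1, which is not exactly representable in binary, against a higher-precision reference:
# Added check: relative error of representing 0.1 as a Float64
x = 0.1                    # not exactly representable in binary
x_exact = BigFloat("0.1")  # higher-precision reference
@show abs(big(x) - x_exact) / x_exact
@show eps(Float64) / 2;    # bound for round-to-nearest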
Exact arithmetic, correctly rounded¶
Take an elementary math operation \(*\) (addition, subtraction, multiplication, division), and the discrete operation that our computers perform, \(\circledast\). Then
\[ x \circledast y = \operatorname{fl}(x * y) \]
with a relative accuracy \(\epsilon_{\text{machine}}\),
\[ \frac{|(x \circledast y) - (x * y)|}{|x * y|} \le \epsilon_{\text{machine}} . \]
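For example (an added check using BigFloat as an exact reference), a single Float64 multiplication agrees with the exact product of its already-rounded inputs to within \(\epsilon_{\text{machine}}\):
# Added check: one operation is exact arithmetic, correctly rounded
x, y = 0.1, 0.7
computed = x * y                 # Float64 multiply: fl(x * y)
reference = big(x) * big(y)      # exact product of the two Float64 inputs
@show abs(computed - reference) / abs(reference)
@show eps(Float64) / 2;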
Seems easy, how do operations compose?¶
Is this true when operations are composed? For example, is \(\operatorname{fl}\big(\operatorname{fl}(x + y) + z\big)\) within relative error \(\epsilon_{\text{machine}}\) of \((x + y) + z\)?
f(x; y=1, z=-1) = (x+y)+z # The best arbitrary numbers are 0, 1, and -1
plot(x -> abs(f(x) - x)/abs(x), xlims=(-1e-15, 1e-15))
Which operation caused the error?¶
\(\texttt{tmp} = \operatorname{fl}(x + 1)\)
\(\operatorname{fl}(\texttt{tmp} - 1)\)
Use Julia’s BigFloat
@show typeof(big(.1))
@show big(.1) # Or BigFloat(.1); parsed as Float64, then promoted
@show BigFloat(".1"); # Parse directly to BigFloat
typeof(big(0.1)) = BigFloat
big(0.1) = 0.1000000000000000055511151231257827021181583404541015625
BigFloat(".1") = 0.1000000000000000000000000000000000000000000000000000000000000000000000000000002
tmp = 1e-15 + 1
tmp_big = big(1e-15) + 1 # Parse as Float64, then promote
abs(tmp - tmp_big) / abs(tmp_big)
1.102230246251563524952071662733800140614440125894379682676737388538642032894338e-16
r = tmp - 1
r_big = big(tmp) - 1
abs(r - r_big) / abs(r_big)
0.0
Activity: 2022-01-12-taylor-series¶
Use Julia, Jupyter, Git
Look at how fast series converge when taking only finitely many terms
Explore instability, as is occurring for large negative \(x\) above, but not for the standard library expm1
function myexp(x, k)
    sum = 0
    term = 1
    n = 1
    # modify so at most k terms are summed
    while sum + term != sum
        sum += term
        term *= x / n
        # YOUR CODE HERE
        if n == k
            break
        end
        n += 1
    end
    sum
end
rel_error(x, k) = abs(myexp(x, k) - exp(x)) / exp(x)
ks = 2 .^ (0:10) # [1, 2, 4, ..., 1024];
plot(ks, k -> rel_error(-20, k), xscale=:log10, yscale=:log10)
plot(x -> rel_error(x, 1000) + 1e-17, xlims=(-20, 20), yscale=:log10)
What happened?¶
Let’s look at the terms for positive and negative \(x\)
function expterms(x, k=50)
    term = 1.
    terms = [term]
    for n in 1:k
        term *= x / n
        push!(terms, term)
    end
    terms
end
x = -10
@show sum(expterms(x)) - exp(x)
@show (sum(expterms(x)) - exp(x)) / exp(x)
expterms(x)
sum(expterms(x)) - exp(x) = 1.2092180894291739e-12
(sum(expterms(x)) - exp(x)) / exp(x) = 2.6634800885273225e-8
51-element Vector{Float64}:
1.0
-10.0
50.0
-166.66666666666669
416.66666666666674
-833.3333333333335
1388.8888888888891
-1984.1269841269846
2480.1587301587306
-2755.7319223985896
2755.7319223985896
-2505.2108385441725
2087.6756987868107
⋮
-4.902469756513546e-8
1.2256174391283866e-8
-2.9893108271424062e-9
7.117406731291443e-10
-1.655210867742196e-10
3.761842881232264e-11
-8.359650847182808e-12
1.81731540156148e-12
-3.866628513960596e-13
8.055476070751242e-14
-1.64397470831658e-14
3.28794941663316e-15
@show exp(-10)
bar(expterms(-10))
exp(-10) = 4.5399929762484854e-5
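This explains the size of the relative error seen above (a back-of-the-envelope check added here): the largest terms are about 2755 in magnitude while the result is about \(4.5 \times 10^{-5}\), so roundoff in the big terms is amplified by their ratio when they cancel.
# Added check: roundoff in the largest terms, amplified by cancellation
amplification = maximum(abs.(expterms(-10))) / exp(-10)
@show amplification
@show amplification * eps();  # same order as the observed relative error ~2.7e-8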
Conditioning¶
What sort of functions cause small errors to become big?
Consider a function \(f: X \to Y\) and define the absolute condition number
\[ \hat\kappa = \lim_{\delta \to 0} \max_{|\delta x| \le \delta} \frac{|f(x + \delta x) - f(x)|}{|\delta x|} = \max_{\delta x} \frac{|\delta f|}{|\delta x|} . \]
If \(f\) is differentiable, then \(\hat\kappa = |f'(x)|\).
Floating point offers relative accuracy, so it's more useful to discuss the relative condition number,
\[ \kappa = \max_{\delta x} \frac{|\delta f| / |f|}{|\delta x| / |x|} = \hat\kappa \cdot \frac{|x|}{|f|} . \]
Condition numbers¶
\[ \kappa = |f'(x)| \frac{|x|}{|f|} \]
f(x) = x - 1; fp(x) = 1
plot(x -> abs(fp(x)) * abs(x) / abs(f(x)), xlims=(0, 2))
Back to \(f(x) = e^x - 1\)¶
f(x) = exp(x) - 1
fp(x) = exp(x)
plot(x -> abs(fp(x)) * abs(x) / abs(f(x)))
What does it mean?¶
The function \(f(x) = e^x - 1\) is well-conditioned
The function \(f_1(x) = e^x\) is well-conditioned
The function \(f_2(x) = x - 1\) is ill-conditioned for \(x \approx 1\)
The algorithm is unstable¶
f(x) = exp(x) - 1
is unstable
Algorithms are made from elementary operations
Unstable algorithms do something ill-conditioned
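To see the instability directly (a small added demonstration): for tiny \(x\), \(e^x\) rounds to exactly 1, so subtracting 1 destroys all information about \(x\).
# Added demonstration: for tiny x, exp(x) rounds to 1 and the subtraction returns 0
x = 1e-20
@show exp(x)       # 1.0
@show exp(x) - 1;  # 0.0, but the true value of e^x - 1 is ≈ 1.0e-20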
A stable algorithm¶
We used the series expansion previously.
accurate for small \(x\)
less accurate for negative \(x\) (see activity)
we could fix that using the symmetry \(e^{-x} = 1/e^{x}\)
inefficient because we have to sum lots of terms
Standard math libraries define a more efficient stable variant, \(\texttt{expm1}(x) = e^x - 1\).
plot([x -> exp(x) - 1,
x -> expm1(x)],
xlims = (-1e-15, 1e-15))
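As a sketch of one stable strategy (illustrative only, following the series approach from the activity; this is not how the standard library implements expm1), we can sum the series directly when \(|x|\) is small so the troublesome subtraction never happens:
# Sketch: evaluate e^x - 1 without catastrophic cancellation for small |x|
# (illustrative only; not the standard library's expm1 implementation)
function my_expm1(x)
    abs(x) > 0.5 && return exp(x) - 1   # cancellation is mild for larger |x|
    term = x                            # first series term x^1 / 1!
    result = x
    n = 2
    while true
        term *= x / n                   # next term x^n / n!
        new_result = result + term
        new_result == result && return result
        result = new_result
        n += 1
    end
end
@show my_expm1(1e-15) - expm1(1e-15);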
Another example \(\log(1 + x)\)¶
What is the condition number of \(f(x) = \log(1 + x)\) for \(x \approx 0\)?
plot([x -> log(1 + x),
x -> log1p(x)],
xlims = (-1e-15, 1e-15))
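The same pattern appears here (a small added check): for tiny \(x\), \(1 + x\) rounds to exactly 1 before the logarithm is taken, while log1p keeps the information.
# Added check: 1 + x rounds to 1 for tiny x, so log(1 + x) returns 0
x = 1e-20
@show log(1 + x)   # 0.0
@show log1p(x);    # ≈ 1.0e-20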
cond1(x) = abs(1/(1+x) * x / log1p(x))
cond2(x) = abs(1/x * x / log(x))
plot([cond1 cond2], xlims=(-1, 2), ylims=(0, 100))
Reliable = well-conditioned and stable¶
Mathematical functions \(f(x)\) can be ill-conditioned (big \(\kappa\))¶
Modeling is how we turn an abstract question into a mathematical function
We want well-conditioned models (small \(\kappa\))
Some systems are intrinsically sensitive: fracture, chaotic systems, combustion
Algorithms f(x) can be unstable¶
Unreliable, though sometimes practical
Unstable algorithms are constructed from ill-conditioned parts
An ill-conditioned problem from Paul Olum¶
From Surely You’re Joking, Mr. Feynman (page 113)
So Paul is walking past the lunch place and these guys are all excited. “Hey, Paul!” they call out. “Feynman’s terrific! We give him a problem that can be stated in ten seconds, and in a minute he gets the answer to 10 percent. Why don’t you give him one?” Without hardly stopping, he says, “The tangent of 10 to the 100th.” I was sunk: you have to divide by pi to 100 decimal places! It was hopeless.
What’s the condition number?
\(f(x) = \tan x\)
\(f'(x) = 1 + \tan^2 x\)
\[ \kappa = |f'(x)| \frac{|x|}{|f(x)|} = |x| \, \frac{1 + \tan^2 x}{|\tan x|} \ge 2 |x| , \]
so at \(x = 10^{100}\) the condition number is at least \(2 \times 10^{100}\): answering to 10 percent really does require knowing \(\pi\) to about 100 digits.
tan(1e100)
-0.4116229628832498
tan(BigFloat("1e100", precision=500))
0.4012319619908143541857543436532949583238702611292440683194415381168718098221194
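An illustrative check (added here): perturbing \(x = 10^{100}\) by about one part in \(2^{52}\), roughly one Float64 ulp, shifts the argument of \(\tan\) by on the order of \(10^{84}\) periods, so the value changes completely.
# Added check: a one-ulp-sized relative perturbation of x = 1e100 gives an unrelated value
setprecision(500) do
    x = BigFloat("1e100")
    dx = x * eps(Float64)   # relative perturbation ~2.2e-16, absolute ~2.2e84
    @show tan(x)
    @show tan(x + dx)
end;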
Go find some functions…¶
Find a function \(f(x)\) that models something you’re interested in
Plot its condition number \(\kappa\) as a function of \(x\)
Plot the relative error (using single or double precision; compare using Julia’s big)
Is the relative error ever much bigger than \(\kappa \epsilon_{\text{machine}}\)? (A starting-point sketch follows after this list.)
Can you find what caused the instability?
Share on Zulip
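As a starting-point sketch (with \(f(x) = \sqrt{1+x} - 1\) chosen purely as an illustration, not an assigned function), the scaffolding might look like this:
# Illustrative scaffold: condition number of f versus its measured relative error
f(x) = sqrt(1 + x) - 1
fp(x) = 1 / (2 * sqrt(1 + x))               # f'(x)
kappa(x) = abs(fp(x)) * abs(x) / abs(f(x))  # relative condition number
relerr(x) = Float64(abs(f(x) - f(big(x))) / abs(f(big(x))))
plot(x -> relerr(x) + 1e-17, xlims=(-0.5, 0.5), yscale=:log10, label="measured error")
plot!(x -> kappa(x) * eps(), label="κ ⋅ ϵ")
Here the measured error far exceeds \(\kappa \, \epsilon_{\text{machine}}\) near \(x = 0\), which points at the unstable subtraction in sqrt(1 + x) - 1.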