Because I work regularly with developers on the MATLAB Math team, I see the questions that come to them from tech support. Today there was an interesting one.
A user observed that computing the sin function takes longer for very big numbers. The user asked, "Why is that?"
Well, first let's see if we can reproduce the observation. Start by making a vector with one million numbers in the interval [0,1].
rng(42)
x = rand(1000000,1);
(I seeded the random number generator using rng so that the numbers below don't change each time I run this script.)
How long does it take to compute the sin of all those numbers?
f = @() sin(x); timeit(f)
ans = 0.002102730998500
Now repeat the timing experiment with another vector containing one million numbers in the interval [100000000,100000000+1].
x2 = x + 100000000; g = @() sin(x2); timeit(g)
ans = 0.027343141998500
The user is right, it does take longer!
Here's the straightforward explanation: For large numbers, it takes more computation to produce an accurate result.
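To get a feel for why, look at the spacing of double-precision numbers near z = 10^8, and at how many multiples of 2*pi have to be removed to bring such a number into a small interval. (This is just a back-of-the-envelope sketch; the actual reduction algorithm inside sin is more elaborate.)

```matlab
z = 1e8;
eps(z)            % spacing of doubles near 1e8, about 1.5e-8
floor(z/(2*pi))   % roughly 1.6e7 multiples of 2*pi to remove
```

Any tiny error in the value of 2*pi used for the reduction gets multiplied by that factor of 1.6e7, so the reduction has to be carried out with extra precision to preserve 15 digits in the final answer.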
To illustrate, let's look at the output of sin for one large number in MATLAB and compare it to the result from another application. I'm going to use Excel for this comparison, but the results will be similar for code you write in C that calls the standard C math library function, sin().
z = x2(1)
z = 1.000000003745401e+08
matlab_result = sin(z)
matlab_result = 0.734111589042897
excel_result = 0.734111669990523
excel_result = 0.734111669990523
These two results agree to only 6 digits!
Let's use the Symbolic Math Toolbox to compute sin(z) with 20 decimal digits of accuracy.
sym_result = sin(sym(z))
sym_result = sin(3355443212567481/33554432)
vpa(sym_result,20)
ans = 0.73411158904289725019
From this calculation, you can see that the MATLAB result is accurate to 15 decimal digits, whereas the Excel result is accurate to only 6 digits.
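You can check those digit counts directly by comparing each result against the 20-digit reference value above (a quick sanity check using the numbers printed in this post):

```matlab
ref = 0.73411158904289725019;
matlab_result = 0.734111589042897;
excel_result  = 0.734111669990523;
abs(matlab_result - ref)   % on the order of 1e-16: about 15 correct digits
abs(excel_result - ref)    % on the order of 1e-7: only about 6 correct digits
```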
That's called sweating the details.
Published with MATLAB® R2016b
9 Comments
Those are interesting insights into how numerical approximation works in MATLAB.
I found one even more surprising example of rounding error that users should be aware of. Please check the following code:
a = 77617;
b = 33096;
result = (333.75-a^2)*b^6+(11*a^2*b^2-121*b^4-2)*a^2+5.5*b^8+a/b/2
a = sym(a);
b = sym(b);
f = (333.75-a^2)*b^6+(11*a^2*b^2-121*b^4-2)*a^2+5.5*b^8+a/b/2;
sym_result = vpa(f,20)
The two results do not even have the same sign. Clearly this problem can occur in any computational software, because of the limited precision of floating-point representation.
Anry—Your expression is adding and subtracting numbers on the order of 10^37. The result you got was accurate to within about one part in a quintillion quintillion. That seems lucky.
Interesting. I compared this with my vsin() MEX function, written in C, that calls vdSin() from the Intel Math Kernel Library:
x = rand(1e6,1);
x = x + 1e8;
0.051318654822932 % 11.2x slower
0.00418348750972378 % 1.7x slower
sym_result = vpa(sin(sym(x(1))),20)
x = rand(1e4,1) + 1e8;
sym_result = vpa(sin(sym(x)),20);
norm(sin(x) - double(sym_result))
norm(vsin(x) - double(sym_result))
The sin function takes 11.2x longer for very big numbers, while vsin takes only 1.7x longer.
The vsin result also seems to be more accurate than sin.
I know. The luck here is that the error is 2.
Are you allowed to explain why MATLAB is more accurate? My guess is that it does some kind of mod(2*pi) reduction before calculating the sine – but then why don’t all algorithms do that?
lycao—Thank you for providing the comparison.
Tektotherriggen—When very large numbers are involved, the straightforward approach to range reduction loses precision.
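Here is a small sketch of that precision loss, using the z from the post above (this is not MATLAB's actual algorithm, just an illustration): reduce z modulo 2*pi once with the double-precision value of pi, and once with the symbolic value, then take the sine of each.

```matlab
z = 1.000000003745401e+08;
r1 = mod(z, 2*pi);                   % naive double-precision reduction
r2 = double(mod(sym(z), 2*sym(pi))); % reduction with an exact value of pi
sin(r1)   % off from sin(z) in roughly the 9th digit
sin(r2)   % agrees with sin(z) to about 15 digits
```

The rounding error in the double-precision value of 2*pi, about 2.4e-16, is multiplied by the roughly 1.6e7 multiples being removed, contaminating the reduced argument r1 at the 1e-9 level.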
Hmm, do you use some kind of compensated summation (e.g. Kahan’s algorithm), or something even more tuned for trigonometry?
Tektotherriggen—I don’t know the details.