- the flag 'f' returns the exact representation of the floating-point number, as-is, without any approximation. Obviously the precision is limited by the original data type of the floating-point number (32 bits for singles and 64 bits for doubles).

- the flag 'r' (the default) tries to be smart and compensate for existing round-off error to get a "nicely expressed" number ("nice" being one of the supported forms p/q, p*pi/q, sqrt(p), 2^q, 10^q with integers p and q).

- the flag 'e' returns the same thing as 'r' plus the difference between that and the exact 'f' representation, with the estimation error expressed in terms of "eps", the machine epsilon.

- the flag 'd' takes the exact 'f' representation and rounds it to the number of digits given by the current digits() setting. In other words, sym(x,'d') is equivalent to vpa(sym(x,'f'),digits()).
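
As a cross-check outside MATLAB, the 'd' behavior above can be reproduced with Python's exact Fraction and 32-significant-digit Decimal arithmetic (32 being sym's default digits() value); the variable names here are mine, and Decimal division standing in for vpa is my assumption:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 32                 # mirrors MATLAB's default digits() = 32

exact = Fraction(0.1)                  # the exact 'f'-style value of the double 0.1
print(exact)                           # 3602879701896397/36028797018963968

# dividing two Decimals rounds to the current precision, like vpa(..., 32)
d = Decimal(exact.numerator) / Decimal(exact.denominator)
print(d)                               # 0.10000000000000000555111512312578
```

The last line matches the sym(t,'d') output shown further down.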

___

To confirm my understanding of the ‘f’ flag option, I tried it manually with the example you gave:

>> t = 0.1

>> fprintf('%bx', t)

3fb999999999999a

Next we convert the hex notation to binary:

% the 64-bit double-precision floating-point representation of one-tenth

0 01111111011 1001100110011001100110011001100110011001100110011010

We separate the sign, exponent, and fraction parts and store them as bit vectors:

sign = 0

exponent = '01111111011' - '0'

fraction = '1001100110011001100110011001100110011001100110011010' - '0'

Now we compute the real value they represent (according to the definition of the IEEE 754 format):

% (here I'm only using SYM to keep the numbers expressed as quotients)

>> x = ...
(-1)^(sign) * ...
2^(sum(exponent .* (2.^sym(10:-1:0))) - 1023) * ...
(1 + sum(fraction .* (2.^sym(-(1:52)))))

and we get the same thing returned by the SYM function with the ‘f’ flag:

>> x

x =

3602879701896397/36028797018963968

>> sym(t, 'f')

ans =

3602879701896397/36028797018963968
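
The same decoding can be cross-checked outside MATLAB. Here is a short Python sketch (variable names are mine) that pulls the raw bits with struct and rebuilds the value as an exact fraction:

```python
import struct
from fractions import Fraction

# reinterpret the double 0.1 as a 64-bit unsigned integer
bits = struct.unpack('<Q', struct.pack('<d', 0.1))[0]
print(f'{bits:016x}')                        # 3fb999999999999a

sign     = bits >> 63                        # 1 bit
exponent = (bits >> 52) & 0x7FF              # 11 bits, biased by 1023
mantissa = bits & ((1 << 52) - 1)            # 52 fraction bits

# IEEE 754 normalized value: (-1)^s * 2^(E-1023) * (1 + m/2^52)
x = (-1) ** sign * Fraction(2) ** (exponent - 1023) * (1 + Fraction(mantissa, 1 << 52))
print(x)                                     # 3602879701896397/36028797018963968
```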

___

By the way, I think there is a typo in the documentation of the SYM function regarding the 'f' flag. It says:

f stands for "floating-point." All values are represented in the form N*2^e or -N*2^e, where N and e are integers, N >= 0. [...]

Isn't N supposed to be a *rational number*, not an *integer*? (Still non-negative, of course.)
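
For what it's worth, here is a small Python cross-check (Fraction standing in for sym) suggesting that integer N does work as long as e may be negative, since the binary fraction gets absorbed into 2^e:

```python
from fractions import Fraction

f = Fraction(0.1)                       # exact value of the double 0.1
# the denominator of any finite double is a power of two,
# so f = N * 2**e with integer N and (possibly negative) integer e
e = -(f.denominator.bit_length() - 1)
N = f.numerator
print(N, e)                             # 3602879701896397 -55
assert Fraction(N) * Fraction(2) ** e == f
```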

An example

>> t = .1

t =

0.1000

>> sym(t)

ans =

1/10

>> sym(t,'f')

ans =

3602879701896397/36028797018963968

>> sym(t,'e')

ans =

eps/40 + 1/10

>> sym(t,'r')

ans =

1/10

>> sym(t,'d')

ans =

0.10000000000000000555111512312578

Use 'f' if you want an exact conversion, avoiding sym's trickery. Use 'd' with vpa. Use 'e' if you want to see floating-point approximations expressed in terms of eps. Use 'r' to get an aggressive form of sym's tricks, or maybe it's just the same as sym without a flag, I forget.
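
The eps/40 term from the 'e' flag can be confirmed with exact arithmetic. This Python snippet (not MATLAB, but the fractions are the same) checks that the stored double exceeds 1/10 by exactly eps/40:

```python
from fractions import Fraction

eps = Fraction(2) ** -52                     # machine epsilon for doubles
err = Fraction(0.1) - Fraction(1, 10)        # round-off committed when storing 0.1
print(err == eps / 40)                       # True
```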

— Cleve

I put explicit details (including code, output, and explanation) at

http://www.hbeLabs.com/lshape/index.html

that demonstrate clearly how to obtain the 150 digits or more. (I'm still making small edits, but it seems close to what I want to say.)

Perhaps someone can reproduce it in MATLAB? Have a lot of fun!

–Bob Jones

You may be surprised to learn that I am applying your original point-matching method, but on steroids.

Include enough digits in the intermediate calculations and it is numerically very well behaved, i.e., no spurious solutions or ill-conditioning appear for this shape (and probably many others that exhibit exponential convergence).

I use fractional-order Bessel functions and properly symmetrized sinusoids centered on the re-entrant angle. My matching points are uniformly distributed as 3i/(N+1) for i=1..N along the length-3 matching edge (in the half-L), the number of matching points equals the number of terms in the expansion, and there are no interior points as in the GSVD approach, nor extra points on the edges, for example. Calculate the determinant and be clever about root finding (working with dL/L = 1e-150 and determinant values of 1e-200000 may scare people at first). Make sure the Bessel functions (and sinusoids, etc.) are good to such precision.

As I increment N (by 3 for the L-shape, so the same proportion of points stays in each segment), the approximate eigenvalue alternates above and below an asymptote while smoothly approaching it. As one increments N, the previous results can be used to help bracket the root. If no mistakes are made, it works perfectly from N=20 to 700, so far. (Change the distribution even a little and it may fail, but it still works to some extent.) Note, the exponential convergence rate is as expected, about 4 to 5 basis functions per digit, so nothing new there.

I think the trade-off in numerics pays off compared with other popular methods like GSVD, especially if one pursues many digits. It works for any (non-degenerate, non-closed-form) eigenvalue as long as you can get enough expansion terms. (Closed-form eigenvalues converge many orders of magnitude faster, but show the same pattern.)
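
None of the specifics above are mine to reconstruct exactly, but the basic point-matching setup can be sketched at ordinary double precision. This sketch assumes the classic L-shape basis J_{2j/3}(sqrt(lam)*r)*sin(2jt/3), uses the full outer boundary rather than the symmetry-reduced half-L described above, and all names and the geometry are my assumptions; the serious version needs the high-precision Bessel evaluations the comment describes:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, fractional order OK

def matching_matrix(lam, N):
    """Point-matching matrix for the L-shaped membrane (simplified sketch).

    Basis u_j(r, t) = J_{2j/3}(sqrt(lam)*r) * sin(2*j*t/3), j = 1..N, which
    already vanishes on the two edges meeting at the re-entrant corner
    (t = 0 and t = 3*pi/2). Rows evaluate the basis at N points spread
    uniformly along the remaining boundary.
    """
    # L-shape of three unit squares with the re-entrant corner at the origin;
    # walk the outer boundary polyline (total length 6) and place N points on it
    corners = [(1, 0), (1, 1), (-1, 1), (-1, -1), (0, -1)]
    pts = []
    for i in range(1, N + 1):
        s = 6 * i / (N + 1)
        x0, y0 = corners[0]
        for x1, y1 in corners[1:]:
            seg = abs(x1 - x0) + abs(y1 - y0)   # edges are axis-aligned
            if s <= seg + 1e-12:
                f = s / seg
                pts.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
                break
            s -= seg
            x0, y0 = x1, y1
    A = np.empty((N, N))
    for i, (x, y) in enumerate(pts):
        r = np.hypot(x, y)
        t = np.arctan2(y, x)
        if t < 0:
            t += 2 * np.pi                      # map the angle into (0, 3*pi/2]
        for j in range(1, N + 1):
            A[i, j - 1] = jv(2 * j / 3, np.sqrt(lam) * r) * np.sin(2 * j * t / 3)
    return A

# track the sign of the (column-normalized) determinant as lam varies;
# a sign change brackets an approximate eigenvalue
A = matching_matrix(9.6397, 15)
sign, logdet = np.linalg.slogdet(A / np.linalg.norm(A, axis=0))
print(A.shape, np.isfinite(A).all())            # (15, 15) True
```

Column normalization before slogdet keeps the wildly different magnitudes of the high-order Bessel columns from overflowing or underflowing the determinant.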

Anyone can do it.

To get the 150+ digits below, I use up to N=708 and 711 matching points and perhaps 750 digits in the intermediate calculations. (Keep the precision well above the number of terms in the expansion.) I was greatly influenced by your now-famous paper in the early 1990s, when I observed something similar (though not exactly the same; I wasn't expanding about the correct angle), but it only made it into my thesis. Only now did I decide to pick it up again, as a hobby. I am quite surprised that no one else saw this nice numerical behavior and the ease with which one can bound eigenvalues. It was only last week that I first tried the L-shape (among others), and it worked like a charm. I had to share it with you.

9.6397238440219410527114592623648231562672895258219

064561095797005640356478633703907228731650087967888

3115756686104335651595260970541019403854362864964220

Lower bound: append 2007; upper bound: append 0851.

Happy New Year!

Bob Jones