The typography and design inside the book are beautiful, but the cover looks like an old bargain-store flyer. There's an expression about that, something about judging a book by its cover? I can't remember exactly.
yababa_y 7 hours ago [-]
i've been missing this knowledge! thank you for the recommendation.
gus_massa 10 hours ago [-]
Nitpicking:
> You’re never going to get error less than 10E−15 since that’s the error in the tabulated values,
If you have like 100 (or 400) values in the table, you can squeeze one more digit.
In the simple case, imagine you have the constant pi, with 15 digits, repeated 100 times. If the rounding was done in a random direction like
floor(pi * 1E15 + random() - 1/2 ) / 1E15
then you can use statistics to get an average with an error that is like K/sqrt(N), where K is a constant and N is the number of samples. If you were paying attention in your statistics classes, you probably know the value of K, but that's not my case. Anyway, just pick N=100 or N=400 or something like that and you get another digit for "free".
Nobody builds a table for a constant and uses a uniform offset for rounding. But with a smooth function that has no special values at the points where it's sampled, the exact value of the cut decimal is quite "random", so you get a similar randomization.
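A quick sketch of the averaging idea (my own toy, not from the thread). It uses the unbiased stochastic-rounding variant floor(x*s + u) with u uniform in [0,1), so each rounded value estimates x without bias; the 3-digit precision and N=10000 are arbitrary choices to make the effect visible:

```python
import math
import random

random.seed(42)

def stochastic_round(x, digits=3):
    # floor(x * 10^d + u), u uniform in [0, 1): rounds down or up with
    # probability proportional to the dropped fraction, so each rounded
    # value is an unbiased estimate of x
    s = 10.0 ** digits
    return math.floor(x * s + random.random()) / s

N = 10_000
avg = sum(stochastic_round(math.pi) for _ in range(N)) / N

single_err = abs(round(math.pi, 3) - math.pi)  # one tabulated value: ~4e-4
avg_err = abs(avg - math.pi)                   # averaged: ~ K/sqrt(N), smaller
```

With these numbers the averaged error lands around two orders of magnitude below a single rounded value, i.e. the extra "free" digit (and then some).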
_0ffh 6 hours ago [-]
Also, nobody in their right mind uses lookup tables where the table value is just the float approximation of the true f(x). You choose the support values to minimize some error measure (e.g. MSE) over a dense sampling of the interpolated values across x (or, in the limit, the integral of the chosen error function between the true curve and the interpolation of your supports). If you approximate a convex function with linear interpolation, for example, all the tabulated values would end up <= the true f(x).
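A toy illustration of that point (my construction; the function exp, the 2-point table, and the common-offset search are all assumptions chosen just to show the effect):

```python
import math

# Dense sampling of exp on [0, 1]; the "table" is two supports at x=0 and x=1
xs = [i / 1000 for i in range(1001)]
true = [math.exp(x) for x in xs]

def mse(y0, y1):
    # mean squared error of linearly interpolating between supports y0 and y1
    return sum((y0 + (y1 - y0) * x - t) ** 2
               for x, t in zip(xs, true)) / len(xs)

naive = mse(1.0, math.e)  # supports = exact function values

# exp is convex, so the chord lies above the curve; shifting both supports
# down by a common offset d reduces the error.  Brute-force the best d.
err_best, d_best = min((mse(1.0 - d, math.e - d), d)
                       for d in [k / 10000 for k in range(2000)])
```

The optimal offset comes out near 0.14, i.e. both tabulated values sit strictly below the true f(x), and the resulting MSE beats the naive table.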
AlotOfReading 2 hours ago [-]
The true value is far more useful in a lot of cases. If you're building a table indexed by the upper mantissa bits of the float, for example, it's difficult to distribute the error properly across all intervals.
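For concreteness, a minimal sketch of such an index, assuming IEEE-754 binary64 and a hypothetical 6-bit table (the helper name is mine):

```python
import struct

def mantissa_index(x, bits=6):
    # Reinterpret the double as a 64-bit integer
    (u,) = struct.unpack("<Q", struct.pack("<d", x))
    # The top `bits` of the 52-bit mantissa fraction pick the table entry
    return (u >> (52 - bits)) & ((1 << bits) - 1)

# 1.0 has a zero fraction, so it lands in entry 0;
# 1.5 = 1.1b, top mantissa bits 100000b, entry 32
```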
http://182.160.97.198:8080/xmlui/bitstream/handle/123456789/...
which is all about the kind of numerical analysis you would do by hand and introduces a lot of really cool math like the calculus of differences
https://en.wikipedia.org/wiki/Finite_difference
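As a tiny taste of the calculus of differences: forward differences of a degree-n polynomial become constant after n steps, which is what made hand-built tables checkable. For squares:

```python
def forward_differences(values):
    # One row of the difference table: delta f[i] = f[i+1] - f[i]
    return [b - a for a, b in zip(values, values[1:])]

squares = [n * n for n in range(8)]      # [0, 1, 4, 9, 16, 25, 36, 49]
d1 = forward_differences(squares)        # the odd numbers
d2 = forward_differences(d1)             # constant 2 (degree 2 polynomial)
```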