inverse circular functions are inaccurate for large arguments #1
There is surprisingly little info on evaluating asin/acos. One thing: (asin(x - 1) + pi/2)/sqrt(x) is a very gently sloping curve on (0, 2] - it should be easy to approximate. Generous info on atan: https://stackoverflow.com/questions/23047978/how-is-arctan-implemented
I did a little experiment: a 4-term, endpoint-constrained polynomial fits that above curve with 1e-6 average error. Not bad, but it would need to be a lot better for double-double. I might look further.
Looks promising! There are tests (currently SKIPPED) which you can use to benchmark your solution.
I would if GMP wasn't a nightmare to install in Windoze. That's part of what led me here ;)
Comment: forward circular functions could use improvement too; Taylor works but there are much better (faster) ways. Part of me wants to work on it but right now I can't commit the time. |
One more comment for today: I think if I implemented acos I'd do something like this. The fourth-order expansion y = cos(x) ≈ 1 - x^2/2 + x^4/24 can be exactly inverted. That yields:

x = sqrt(6 - sqrt(12 + 24y))
The closer you are to y = 1 (x = 0), the more accurate it is. At y = 0.9999999999999, the two differ by less than 10^-34. So my strategy would look like this: if y > 0.9999999999999, use the above inverted 4th-order expansion as-is. Otherwise, do a variable substitution and work on inverting cos(sqrt(u)). That's very close to linear between u = 0 and the point where cos(sqrt(u)) = 0. Fit a polynomial to that, then do a couple of Newton iterates (note - one can do better in terms of speed). Note that this initialize-plus-Newton strategy wouldn't work well near y = 1, because the steps become 0/0; hence the need for two different regimes. If I had more time/patience, fitting minimax polynomial(s) to cos(sqrt(u)) would work best.
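As an illustration only, here is a minimal double-precision sketch of that two-regime idea. The closed form below is a reconstruction of the inverted fourth-order expansion, and the Newton refinement runs on cos(x) = y directly rather than on the substituted cos(sqrt(u)); a real ddouble implementation would differ:

```python
import math

def acos_sketch(y):
    """Two-regime acos for y in (0, 1], double precision only (a sketch)."""
    # Exactly inverting cos(x) ~ 1 - x^2/2 + x^4/24 gives this closed form:
    x = math.sqrt(6.0 - math.sqrt(12.0 + 24.0 * y))
    if y > 0.9999999999999:
        # Near y = 1 the expansion is extremely accurate; a Newton step
        # would be 0/0 here, so return the closed form as-is.
        return x
    # Elsewhere, refine the initial guess with a few Newton steps on cos(x) = y.
    for _ in range(4):
        x += (math.cos(x) - y) / math.sin(x)
    return x
```

Newton converges quadratically here because the closed form already provides a good initial guess for moderate y.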
Good strategy!
Where exactly does acos break down? I did some experiments around the stuff I mentioned and wasn't really able to improve things - I suspect it's because taking the square root and then squaring loses precision.
It should fail for values very close to one, IIRC. You can look at the tests - last time I checked, if you comment out the skip, they will show you where it breaks down. Since you don't have MPFR available, you could also compute reference values with Mathematica or Julia BigFloat and then compare. Depending on your algorithm, the square root may in exceptional circumstances be problematic: suppose you have (1 - x) with small x; then its square root is roughly (1 - x/2), so you lose a digit of precision. I'd be surprised if that mattered, though.
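If Mathematica or Julia aren't handy either, Python's mpmath (assuming it is installed) gives the same kind of reference values; the helper name here is made up for illustration:

```python
from mpmath import mp, mpf, acos, sqrt, nstr

mp.dps = 50                       # well beyond double-double's ~32 digits

def acos_reference(hi, lo=0.0):
    """Reference acos for a double-double value hi + lo (hypothetical helper)."""
    return acos(mpf(hi) + mpf(lo))

# Reference value near the problematic region x -> 1:
ref = acos_reference(1.0, -1e-20)
print(nstr(ref, 35))
```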
I see it, thanks. Oh but I'm taking sqrt near zero, not 1. Is there a way to print a DDouble as one number with 30 or so decimal places, instead of two doubles? I think it's interesting: on closer examination, our approaches aren't that different. What you're calling a Taylor expansion I'm calling Newton's method, same thing really in this case. Important difference is that I'm using a change of variables which nearly linearizes it, including near acos(x) = 0, and mine's set up for iteration. What I'm seeing is that for inputs near 1 the last six digits or so never converge. Less than about 0.999, one iteration gets to two digits, further iterations don't help. |
I have thought about this some more, and I think at least with acos the problem is inherent: near x = 1 the function behaves like acos(x) ≈ sqrt(2(1 - x)), so its derivative diverges there. So, in other words, for values within the numerical roundoff of ddouble, we have a relative spread of the function values of order one.

There is no such way to print DDoubles yet, unfortunately.
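The spread near x = 1 is easy to demonstrate numerically (a sketch using mpmath, assuming roughly 1e-32 as the ddouble roundoff): two inputs one roundoff apart map to function values that differ by about 40%.

```python
from mpmath import mp, mpf, acos, sqrt, nstr

mp.dps = 60
eps = mpf('1e-32')                # roughly the roundoff of ddouble

f1 = acos(1 - eps)                # ~ sqrt(2*eps) ~ 1.4e-16
f2 = acos(1 - 2 * eps)            # ~ sqrt(4*eps) ~ 2.0e-16
rel_spread = (f2 - f1) / f1       # ~ sqrt(2) - 1 ~ 0.414
print(nstr(f1, 8), nstr(rel_spread, 8))
```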
I don't quite understand that, but I have a counterargument. I wondered if losing precision with sqrt was my issue - I rely on it heavily, and since I both take the square root and square, information could be lost in that process. However, I noticed that if I do Babylonian iterates on libxrec's sqrt output, there is no improvement: the iterates generally don't change the answer. Babylonian iteration converges fast and is super stable, so this tells us that sqrt is more or less perfect. Thought so, but interesting. Well, acos(x) near x = 1 acts just like sqrt(x) near 0. So if sqrt can be computed so well, acos(x) should have similar hope. Yet its Newton iterates oscillate near x = 1.
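The sanity check described above looks something like this in plain double precision (a sketch; the library's sqrt is stood in for by math.sqrt here):

```python
import math

def heron_step(a, s):
    """One Babylonian (Heron) iterate for sqrt(a), starting from the estimate s."""
    return 0.5 * (s + a / s)

a = 2.0
s = math.sqrt(a)                  # correctly rounded in IEEE double
s_refined = heron_step(a, s)
# The iterate stays put to within the last ulp: sqrt was already as good
# as it gets, so further Babylonian steps cannot improve it.
print(s, s_refined)
```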
Oh I do understand now. You make a good point. |
In general, the trigonometric functions and their inverses are quite accurate (though we are not squeezing the last ounce of accuracy yet).
The exception is asin and acos for values close to plus/minus one, and atan for large arguments. I suppose the Taylor expansion is not really working there anymore, but one would have to investigate.