I was solving problem 185B/186D and ran into a weird issue. I got WA on test $$$8$$$ with this submission: 123642519. But when I made the solve function inline, it passed test $$$8$$$ (though it got WA on test $$$31$$$): 123642504.
On test $$$8$$$, the submission without the inlined solve function prints

2.712800440549510 6.287199559450490 0.000000000000000

while the submission with the inlined solve function prints

2.700961045920403 6.299038954079596 0.000000000000000

So the outputs have significant precision differences.
My question is: how does inlining a function impact precision? It would be very helpful if anybody could give some insight.
A plausible explanation for this behavior is that the compiler increased the precision of the inline function variables to long double. And this increased precision caused the function to reach the correct answer after 200 iterations. You may check the following technical report for more details about this issue.
David Monniaux. The pitfalls of verifying floating-point computations. 2007. hal-00128124v1
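One quick way to check whether a given compiler evaluates double expressions at extended precision is the standard FLT_EVAL_METHOD macro; a minimal sketch (mine, not taken from the report):

```cpp
#include <cfloat>
#include <cstdio>

int main() {
    // 0: expressions are evaluated in the declared type (g++ 64 bit / SSE)
    // 2: float and double expressions are evaluated as long double (g++ 32 bit / x87)
    std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
}
```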
Thank you. What I've found out from the paper about inline functions is:
So, if I have a function like

inline double f(double x)

and I call it with a long double value, there is a high chance that all calculations related to the variable x will be performed in long double. But that shouldn't be the case in my submissions, as I've used double everywhere.
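To illustrate what such a promotion can look like in practice, here is a minimal sketch of my own (not from the paper); the division by 3 and the input value are arbitrary, and the effect described in the comments assumes g++ 32 bit with optimization enabled:

```cpp
#include <cstdio>

inline double f(double x) {
    return x / 3.0;  // on x87 the quotient may stay in a register with
                     // long double precision after f is inlined
}

int main() {
    double a;
    std::scanf("%lf", &a);     // e.g. enter 1
    volatile double y = f(a);  // the volatile store rounds the result to a true double
    // on g++ 32 bit this can print "different", because the right-hand f(a)
    // is recomputed with excess precision; on g++ 64 bit (SSE) they match
    std::puts(y == f(a) ? "equal" : "different");
}
```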
Actually, changing the data type of the floating-point variables in the solve function to long double was sufficient to get your code accepted without prefixing the function header with inline.
123689411
Yeah. Even my exact same code compiled with the GNU C++17 (64) compiler got AC. Actually, I was not concerned about getting AC; I was only concerned about the weird inline behavior. Maybe you were right, and some sort of higher precision was triggered by the C++ compiler somehow.
TL;DR — your solution is bad and you got lucky to pass test 8 with one version. Don't assume that inline will help you in the future.
That's not a correct way to check if real values are equal to 0. You need to compare with epsilon.
What happened is that some value like $$$x$$$ got very close to $$$0$$$ and you got a huge negative value as $$$log(x)$$$, maybe even smaller than $$$-inf$$$, which you hardcoded as $$$-2^{61}$$$.
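A minimal sketch of the epsilon comparison (the eps value and the sentinel below are illustrative choices, not taken from the original submission):

```cpp
#include <cmath>

const double eps = 1e-9;

// computed real values should be compared against a tolerance, never with == 0
bool isZero(double x) {
    return std::fabs(x) < eps;
}

// a guarded log; the -1e18 sentinel stands in for the hardcoded -2^61
double safeLog(double x) {
    return isZero(x) ? -1e18 : std::log(x);
}
```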
Nested ternary search isn't the intended solution and it would be bold to claim that it works without big precision errors. Maybe it does. We use values from the range $$$(-inf, 10000)$$$ and that's a big range. I'm not sure fixing the comparison with $$$0$$$ is enough. I actually think it isn't. Nested ternary search is already scary (in terms of precision) for reasonable ranges.
Here's one possible fix: change each ternary search range from $$$[0, s]$$$ to $$$[eps, s]$$$, and use long doubles. You won't need to compare with 0 or use your own infinity value. This new version shouldn't yield huge errors but there's a catch: each of $$$x, y, z$$$ will be at least $$$eps$$$, even though it's maybe optimal to use $$$0$$$. This would be ok if the statement said "each of three printed values can have error 1e-6" but instead the statement wants a small error on the final sum value. You need to analyze if that is satisfied too.
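A minimal sketch of that fix, assuming the usual log-transformed objective for 185B/186D, $$$a \cdot log(x) + b \cdot log(y) + c \cdot log(z)$$$ with $$$x + y + z = s$$$; the input format, eps value, and iteration count are my assumptions:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long double ld;

const ld eps = 1e-9L;
ld s, a, b, c;

// objective after the log transform; both search ranges start at eps,
// so logl never receives 0 and no sentinel "infinity" is needed
ld val(ld x, ld y) {
    ld z = s - x - y;
    return a * logl(x) + b * logl(y) + c * logl(z);
}

// inner ternary search over y in [eps, s - x - eps] for a fixed x
ld inner(ld x) {
    ld lo = eps, hi = max(eps, s - x - eps);
    for (int it = 0; it < 200; it++) {
        ld m1 = lo + (hi - lo) / 3, m2 = hi - (hi - lo) / 3;
        if (val(x, m1) < val(x, m2)) lo = m1; else hi = m2;
    }
    return val(x, lo);
}

int main() {
    cin >> s >> a >> b >> c;                  // assumed input format
    ld lo = eps, hi = max(eps, s - 2 * eps);  // outer search over x
    for (int it = 0; it < 200; it++) {
        ld m1 = lo + (hi - lo) / 3, m2 = hi - (hi - lo) / 3;
        if (inner(m1) < inner(m2)) lo = m1; else hi = m2;
    }
    // lo now approximates the optimal x; one more inner pass would recover
    // y (and z = s - x - y) for the actual output
    cout << fixed << setprecision(15) << lo << '\n';
}
```

Whether the final sum still meets the required error bound with all three values clamped to at least eps is exactly the analysis mentioned above.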
Thanks for your explanation.
Yeah, that was a terrible mistake. I wanted to handle $$$log(0)$$$ and unfortunately forgot that I was dealing with floating-point values. Later, I got AC by modifying the condition of the while loop of the ternary search to

while (hi - lo > eps)

: 123643009. But, taking the mistake into account, shouldn't the submissions with and without the inline function produce the same output and get WA on the same test case, since the same mistake occurs in both?
That's true. But I'm just curious about what's actually happening here.
I don't know. I mostly understand precision, but I don't know what C++ inline can do. Maybe CodingKnight is right that higher precision is triggered. I think that even two identical lines of code within one program can give you two slightly different values, just because e.g. one of them is inside a function and the C++ compiler did some optimization magic. Don't quote me on that, though.
I've written a blog on higher (excess) precision before: https://codeforces.me/blog/entry/78161. I do not believe that inline is what is fundamentally causing the issue.
I think you will find the following examples very interesting:
1. Try running this code in g++ 32 bit with input `-0.0000000000000000001` (it will output `0 1`). Also try adding the pow call and/or switching to g++ 64 bit (it will result in `1 1`). A hypothetical reconstruction of this snippet appears after the TL;DR below.
2. Try running this in g++ 32 bit with the input `-0.0000000000000000001\n-0.0000000000000000001`. It will output `false`. Also try running it with the pow and/or g++ 64 bit; then the output will be `true`.
3. Try running this code in g++ 32 bit with input `-0.0000000000000000001` (it will result in `HACKED`). Also try removing the pow call and/or switching to g++ 64 bit (it will result in `Not hacked`).

The TL;DR is that for legacy reasons g++ 32 bit uses long doubles for all its floating-point calculations. Even a function declared as returning a double actually returns a long double. Only when a number is stored is it actually rounded to float/double. It is a mess. If you instead use g++ 64 bit, then floating-point numbers behave like you'd expect them to.
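The snippets themselves aren't quoted in this thread, so the following is only a hypothetical reconstruction of the first example; it reproduces the described outputs under typical x87 behavior when compiled with optimization, but the actual code may have differed:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    double a;
    cin >> a;          // input: -0.0000000000000000001
    double x = 1 + a;  // on g++ 32 bit this can stay in an x87 register at
                       // long double precision, slightly below 1.0; rounded
                       // to a double it would be exactly 1.0
    // x = pow(x, 1.0);  // adding the pow call forces x through a store,
                         // rounding it up to exactly 1.0 (output becomes 1 1)
    volatile double stored = x;  // a real store, rounded to double precision
    cout << (int)x << ' ' << (int)stored << '\n';
    // g++ 32 bit: 0 1 (the truncation of x happens at excess precision)
    // g++ 64 bit: 1 1 (1 + a already rounds to exactly 1.0 in double)
}
```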
In my examples, the `x` represented as a long double is slightly smaller than `1.0`, but if rounded to a double its value is exactly `1.0`. The pow call causes `x` to temporarily be stored, which rounds it up to `1.0`.