xiaowuc1's blog

By xiaowuc1, 5 years ago, In English

Some of the North American ICPC teams were upsolving NWERC 2011 and on problem G ran into issues where Bellman-Ford would not properly detect the presence of a negative weight cycle. Here's an example of a submission that uses Floyd-Warshall with no epsilons and gets AC, and here's an example of a submission that uses Bellman-Ford with no epsilons and gets WA. I believe that Bellman-Ford gets AC with no epsilon if you use long double.

The problem statement claims that an error of magnitude less than 1e-7 in any one distance computation will not affect the answer. What is the proper way to interpret this claim when implementing a solution to this problem? Furthermore, is the Bellman-Ford algorithm actually not robust in this sense, while Floyd-Warshall is?
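For concreteness, here is a minimal sketch of the kind of epsilon-tolerant Bellman-Ford being discussed (the structure, the variable names, and the EPS value are my own assumptions, not taken from any of the linked submissions): a relaxation only counts if it improves the distance by more than EPS, and the same tolerance is used in the final negative-cycle pass.

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Hypothetical sketch: Bellman-Ford with an epsilon tolerance.
// EPS = 1e-9 is an illustrative choice, smaller than the 1e-7
// error bound promised by the problem statement.
struct Edge { int u, v; double w; };

const double EPS = 1e-9;
const double INF = std::numeric_limits<double>::infinity();

// Fills dist with shortest distances from src; returns true if a
// negative-weight cycle reachable from src is detected.
bool bellmanFord(int n, const std::vector<Edge>& edges, int src,
                 std::vector<double>& dist) {
    dist.assign(n, INF);
    dist[src] = 0.0;
    for (int i = 0; i < n - 1; ++i)
        for (const Edge& e : edges)
            if (dist[e.u] < INF && dist[e.v] > dist[e.u] + e.w + EPS)
                dist[e.v] = dist[e.u] + e.w;
    // One extra pass: any relaxation by more than EPS signals a cycle.
    for (const Edge& e : edges)
        if (dist[e.u] < INF && dist[e.v] > dist[e.u] + e.w + EPS)
            return true;
    return false;
}
```

The tolerance prevents a sub-epsilon rounding difference from being misread as a genuine relaxation in the cycle-detection pass, which is exactly the failure mode described below.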


»
5 years ago, # |

The case on which we got WA has no cycle. My Bellman-Ford sets distance[v] = distance[u] + weight(u, v) in one iteration, and then in the next iteration it still believes distance[v] > distance[u] + weight(u, v) (so it sets distance[v] = distance[u] + weight(u, v) again...). This repeats for n iterations and a "negative cycle" is magically detected. Could anyone explain why this happens?

In our tests this happens only on Windows (not Ubuntu). Without -O2, writing the sum out explicitly — "double x = distance[u] + weight(u, v); if (distance[v] > x) { do something }" — avoids the issue (and gets AC). With -O2 that workaround stops working (because the temporary is optimized away by -O2?). Does Windows compare a against b + c by some method other than "x = b + c; compare a and x"?
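A plausible explanation, and a hedged sketch of a workaround (my own suggestion, not verified on this judge): 32-bit Windows builds typically evaluate doubles on the x87 FPU, which computes b + c at 80-bit extended precision in a register. Comparing that extended-precision sum against the 64-bit value previously stored in distance[v] can report a strict improvement even immediately after the assignment, and with -O2 a plain local `double x` may itself be kept in an 80-bit register, so it does not help. Writing the sum through a `volatile double` forces a genuine 64-bit store, so the comparison and the assignment see the same rounded value:

```cpp
#include <cassert>

// Sketch of the "force a 64-bit store" workaround. On x87 targets the
// sum b + c may be held at 80-bit extended precision; routing it through
// a volatile double forces it to be rounded to the same 64-bit value
// that an assignment to dist[v] would store, so "compare" and "assign"
// agree. (On SSE2 targets, which round every operation to 64 bits
// anyway, this is effectively a no-op.)
inline bool improves(double a, double b, double c) {
    volatile double x = b + c;  // spilled to memory: rounded to 64 bits
    return a > x;
}
```

With this helper, `if (improves(dist[v], dist[u], w)) dist[v] = dist[u] + w;` cannot keep firing on the same edge forever, because once dist[v] holds the rounded sum, `improves` compares it against that same rounded sum. Compiling with -ffloat-store (GCC) or targeting SSE2 arithmetic should have a similar effect.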

»
5 years ago, # |

Wow, so if I understand correctly:

1) you perform operations where an error on the order of machine epsilon can change the verdict (based on what Chelly said), and

2) you use doubles instead of long doubles,

and you are still surprised by some random verdicts?