To all of you who have participated in Google Code Jam this year and have qualified in 1A / are going to qualify in 1B or 1C for Round 2, please keep in mind that you have also qualified for Distributed Code Jam (first round on 14th May). It is an unconventional contest where relatively trivial problems are made interesting by scaling up the problem size and distributing it over multiple systems. Read more about it here: https://code.google.com/codejam/resources/quickstart-guide
No CodeJam rounds clash with Distributed Code Jam rounds. Also, no registration is required apart from Code Jam qualification. Here's the schedule : https://code.google.com/codejam/schedule
Link for past contests (2015, 2016) : https://code.google.com/codejam/past-contests
Remember to go to the Distributed Code Jam tab in any of the above links.
If you have participated in this contest before, please share your experience, tips, dos, and don'ts.
Unfortunately there's no possibility to actually send the solutions for past Distributed Code Jam contests.
There is even a note for this in FAQ:
Let's hope they publish it soon.
Last year, they opened the judge of all the previous problems to anyone who qualified for DCJ before DCJ Round 1. I think they'll do something similar this year!
And they did, but the judge is not working. That's their comment regarding the issue:
Can somebody explain the point of the 10 minute timeout on large submissions when they are not judged until the end of the contest anyway?
Or is it by some chance that it is like the practice round this year, where larges are judged during the contest?
Clarification would be greatly appreciated!
They are judged during the contest, you can see verdicts in 'View my submissions'.
https://code.google.com/codejam/resources/quickstart-guide#dcj "6. After you submit a solution for the Large dataset, you can still resubmit if you'd like, but you must wait 10 minutes between resubmissions. Only your last submission will be judged (after the contest)."
Of course! I was talking about the practice round, sorry if it was unclear.
For Problem C (rps), in the small subtask, I computed everything on the master node (0), while every other node just returned -1 to the master node. It gives wrong answer on the Distributed GCJ practice link. Here's my code.
PS: I have spent more than 12 hours on finding the mistake. Please help :(
vector <char> tmp
Why is it char?
Got AC now. I wonder how I could not spot this. Thank you very much :)
I found it helpful to define a shell command which copies problem_002.h to problem.h, then gets the local testing tool to compile problem.cpp and test it on 20 nodes. (It was necessary to build a 32-bit version of parunner.exe to get it to work, as only a 64-bit binary is provided for Windows.)
A chain benchmark (where 100 nodes sequentially pass along a message) ran in under 400 ms, while when each message was the maximum allowed 8 MB, it took 16708 ms!
Something that remains slightly unclear is the part of the Quick-start guide that says
But it does seem to work: in my benchmark, each non-root node sends two separate messages to the root immediately (this should take maybe 10 ms), while the root takes 160 ms to receive them all, which means that some of those second messages get sent before the root has received the first message from that node. But the root does receive them. So what does the "cannot" mean? Does it mean that the second Send(..) blocks until the root sends an acknowledgement back to the node?
Can one do D-large (Todd and Steven) better than binary search to compute the rank of each value in the range?
On the technical side, the constraint that one can only submit the Large after the Small has been judged correct almost killed me (I submitted D-large without testing 3 seconds before the end of the contest). The problem is that near the end, high server load causes severe delays in Small feedback (apparently up to 6 minutes).
I would suggest allowing Large submissions at any time (with the resubmission delay, of course), but decoupling them from the Small: only once the Small is judged correct would the potential score of the Large be counted and the program executed.
Edit: round turnout for me: failing B, C and D large, but passed thanks to E-large. At least I got lucky by starting the round 45 minutes late :)
You could use the straightforward merge algorithm on each node if you knew for certain which segments [l1, r1) and [l2, r2) of the given arrays it will use. To balance the load, choose the segments so that each node merges roughly the same total number of elements.
And you can calculate these segments for each node using binary search. This problem is equivalent to "take the smallest k elements from these two sorted arrays".
What's the running time of this algorithm on DCJ servers? I simply partitioned the range [1, 5e9] into ~450 parts and had the workers handle the ranges using binary search and merge, and it passed in 1.8 s (I thought, after reading the above, that mine should have TLE'd)...
I was thinking a checkbox asking whether you want to automatically submit Large in case Small is right would be useful.
long queue is long
Here's what I'm doing on the last problem:
The sum we need to compute decreases quickly (N elements at first, N / 100 elements in the 2nd step), so it's fast. I'm worried I could exceed the limit for number of sent messages though.
Rough calculation shows 100 * 99 * 98 * 97 * 96 > 10^9, so it takes only about 500 messages for master node :)
It's about 2x500 for me, that's why I'm worried.
Edit:
The solution below will probably time out.
I did not manage to submit the following solution (I needed 10 more minutes; maybe if they extended the contest like Codeforces...):
Sum the previous results + rerun the broken node's interval on node 0.
I just wonder if finding the QOD within the first 6-10 answers is possible.
I did this, but I guess it will exceed the time limit by calling GetValue(i) too many times (as you said): 0.2 microseconds per query, with about 10^6 elements each asked 10 times = too much.
And by asking only a few times, the chance of finding the query of death decreases too much; the probability is (0.5)^(times_asked - 1).
I also did exactly this. How many times do you ask? I ask 7.
I used 8, but I guess we both will TLE = P
I think so. It's like choosing between WA and TLE. lol
Both our solutions passed!
Same here, I also used 7 iterations and got accepted.
The time is 1121 ms, and surprisingly I guess even 10 iterations would be OK.
ivanilos athin Nice! Mine took only 718 ms. I think we can assume that the expected call time they give is a very very loose upper bound.
Even 11 is OK; my code even ran faster than yours and I don't know why.
I was more concerned about the second phase — recomputing the values by node 0. It should wait for the first processing and then it has to spend the same amount of time recomputing the values on the broken node's interval.
Edit. Maybe it is not an issue, since during second phase we do not ask multiple times about the same value.
The second phase needs to read only once for each value, so it probably takes like 0.2 secs, nowhere near the first.
Yeah, the queries before sending to the master already come close enough to the TL: I used 8 queries for every element, so it is (10^8 / 99) * 0.2 * 10^-6 * times_asked, which is already about 1.6 s if you ask 8 times. And then you do this again with the master, which would take about 3.2 s, TLE Dx
Edit: this is wrong, see zoomswk's comment.
I wonder if that solution passes. Given my luck and the fact that I did not submit it, there's a high chance it would have. On the other hand, if I had submitted it, it would have failed :p.
Edit: I think my steps 1–5 are actually exactly the same as the above solution by Xellos; he just uses 100 in place of my million.
My solution was similar but perhaps more efficient. (Of course, I still don’t know if I got it right in the end… I spent ages and resubmitted small many times looking for a stupid bug.)
Below is a simplified description. (My implementation is a bit more complicated, but needlessly so.)
Designate one node as the master node and the remaining nodes as workers.
Compute sums in a standard distributed way. Each node first reads all of its numbers, stores them in an array and adds them up. Then it rereads some numbers a million times. (In fact, it is enough to store the value of one number and reread the same number a million times.) If any of the rereads does not equal the stored value, the node reports to the master that it is broken; otherwise, it reports the sum. The probability of a false negative is 2^-1000000, a ridiculously small number.
The master node adds up all valid sums and reallocates the broken range among the other worker nodes.
Repeat step 2.
Repeat step 3.
Compute sums in a distributed way, but slightly differently:
The probability of missing the broken index is 2^-1000, which is still plenty small.
The master node adds up all valid sums, the prefix sum reported by the newly broken node and the individual values at indices in range(i+1, end) reported by the newly broken node. It then outputs the total.
Each query takes 0.2 µs, so the total time required for the rereads in steps 2 and 4 is just 0.4 s. The maximum range size is about 10^6 in step 2, 10^4 in step 4 and 10^2 in step 6, so the remaining queries take 0.2 s + 0.002 s + 0.00002 s + 0.02 s (for the rereads in step 6). The total time taken by all queries should thus be well below a second.
A key insight here is that you can read the whole range once and then start rereading numbers. If the range contains iqod, then the node will break during the initial read, and all rereads will be broken.
Thus, you don’t need to reread each index many, many times to make sure you don’t miss iqod (unless you’re trying to actually determine iqod exactly, which you should only do when the range is reasonably small); you just need many rereads in total (which can be of any indices within the node’s allocated range).
From now I'll definitely remember that + has higher order of precedence than ^.
I'll remember to use parentheses.
I will remember that it's better to submit with cerr-s than to try to comment out all the cerr-s and accidentally comment out other lines.
Works for me perfectly (however, be cautious with e.g. "cerr<<Dfs(1)")
I'll remember to think before submitting
I'll remember to include cstdlib before using srand.
Anyone else solve this assuming an antagonistic GetValue function? Seems like a much more interesting problem; though perhaps it would be too hard to make good test data.
What do you mean by antagonistic?
I mean that the "randomness" can be any distribution. In particular it could always give consistent (but possibly wrong) results.
I thought I was called. How about this strategy?
Let's partition the array into blocks of size N/(numNodes/2). Then node 2i+0 will traverse block i from left to right and node 2i+1 will traverse block i from right to left.
Send the block from node 2i+0 to node 2i+1 and compare the two. If they are the same, then the block has true values (whether a node is broken or not doesn't matter). Otherwise, the block contains the death position. Recurse as in this comment.
Edit: I missed the fact that a broken node can't be used for further queries. What msg555 says below addresses exactly this.
Yeah, this is basically what I ended up with. The one thing I would add is that for the small test case you end up running out of nodes and need to switch to an alternate "three node" search method.
The "three node" search involves one node scanning the array forwards and one node scanning the array backwards. Any value that the two nodes agree on is surely correct, as it is impossible for both nodes to have seen the QOD at the time they made the corresponding queries. In particular, the value at the QOD index is agreed upon by both nodes. Therefore we can use a third node to scan only the values the first two nodes disagree on, dodging the QOD and thereby being assured of the queries' correctness.
Nore says he has a solution with (2k-1)n nodes and (at most) n^k values : each value should be visited by (2k-1) different nodes, and the intersection of visiting nodes between two values should contain at most (k-1) nodes, the majority of visiting nodes for each value is correct. (I didn't check myself that it is possible with (2k-1)n and n^k). I don't know if it can be distributed well enough. It can be used to solve this round's large with n=3 and k=17, but not the small (it would require 51 nodes).
I'm just posting some more details for the record: the idea was to consider the n^k values as a k-dimensional vector space over the field Z/nZ (if n is not prime, it might still work, I am not sure), and to find (2k-1) hyperplanes in general position in that vector space (the trick from Shamir's secret sharing scheme works if 2k-1 >= n, but I am not sure whether it is still possible to generate those hyperplanes in general position for 2k-1 < n). It can be distributed, but not very efficiently, since nearly every node will need to communicate with nearly every other node to send them the relevant results. Finally, a quick analysis shows that this method yields a maximum of 2^O(n) values for n nodes, while anta's solution seems to give 2^Theta(n log n), which is a bit better. Also, is anyone able to prove an upper bound on the number of values when using n nodes and asking for a solution that is correct with probability one and has bounded running time?
Actually the three-node search shows you can do any number of values with 3 nodes. For 1 node you can obviously not do better than 1 value, while for 2 nodes it is not possible to do 3 values, because the QOD might be the first value looked at by the first or by the second node, and you would then have no way of knowing which node is right about the third value.
How does one use the Testrun to test solutions? If I don't include an input file, I get undefined function errors. If I do, the file is not found. Are we supposed to implement these functions within the tested code? Copy them from the input files, and somehow make them slow? Confused.
You are supposed to include it in your code. In Java, at least, you copy the input class (say the pancakes class for the pancakes problem) and then make the class static. You can obviously change the class, say change GetStackSize() to return 10^8 and GetStackItem() to return whatever you want for input in the range [0, 10^8). Then you get somewhat of an idea of how fast your code runs, with the caveat that the function calls obviously do not take the listed times; they are faster. However, it is still useful for checking correctness (in the sense that there is no runtime exception or anything; obviously it always returns WA) and for getting a bit of an idea of whether your code runs way too slow.
Thanks!
I want sleep! When will results be ready???
Email just sent, results are delayed, noooooooooo ...
Results are up! Check the submissions page / scoreboard; everything is judged!
Also just to note there are some errors with the scoring. For instance it shows me with 2 wrong tries for E, but in reality I only had 1 wrong try. This is because I submitted a correct solution, and then an incorrect solution, and somehow it counted the incorrect solution as occurring before the correct one. As a result the code shown in the solution download is also for my incorrect submission, rather than the correct one.
n=1 in E is like the most evil thing I can imagine.
I use this piece of code in all of my codes on DCJ. Here I needed >=3 nodes xd.
I'm pretty sure that in most of the problems you may just delete it. I mean, nodes can iterate over an empty range just fine.
But in this very same problem, when I deleted it I got RTE (locally) because I was referring to the last element of an interval, which was sometimes -1... I agree that in most problems not shutting nodes down will not cause anything bad, but in some it will, whereas the case where nodes you shut down as unnecessary are in fact necessary is extremely rare.
I wonder why D-large was worth 30 while E-large only 29.
In D-large "almost bruteforce" binsearch with no distribution at all (except for summing up the answers) works fine.
IMO problem E is a little bit harder.
I personally solved E faster than D, though that may be because I found E interesting and was able to "bite into" it better. Still, I agree that E is harder than D; it needs a better idea. Maybe the round authors were trying to compensate for E-small being worth more (or were just trolling us).
Maybe because of the very easy solution described above: http://codeforces.me/blog/entry/51596?#comment-360193?
Which passed E-large.
Does anybody know if people who will not qualify for the next round will be allowed into any more practice rounds this year? Or do we wait until 2018 to upsolve?