Google Code Jam 2019 Round 2 will begin at 14:00 UTC. The top 1000 participants will advance to Round 3 and win a t-shirt.
Note that there will be no Distributed Code Jam this year.
Let's discuss the problems after the round is over.
One thing I hate about google code jam — before every contest I spend 10 minutes trying to figure out how to enter. This time I could not figure it out. Does anyone have a link to the elusive contest dashboard?
I am trying to figure out the same thing. Every time before a contest they seem to show a countdown; this time it's just a link to past problems.
There's a "Compete now" button on the page: https://codingcompetitions.withgoogle.com/codejam. I think that should work.
Got the link: https://codingcompetitions.withgoogle.com/codejam/round/0000000000051679
Hmmmmm
Is this a sign that I am going to load the dashboard for 10 minutes before I can start?
Is anyone else stuck on "round loading..."? It seems like my webpage is not loading...
EDIT: it worked after I switched from Chrome to Firefox.
Posting a picture doesn't work
Is this the same issue? I didn't try switching browsers, but it just started working after a few minutes.
Yes. I only waited 2 minutes, so I don't know whether it would have started working. You still lose a few minutes waiting, though.
What is the intended solution for A? I had a $$$O(n^3 \log n)$$$ solution that got TLE.
Check the "critical weight ratio" between each pair of molecules in $$$O(n^2)$$$. The answer is the number of distinct ratios, so use a set or something similar to get $$$O(n^2 \log n)$$$.
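For concreteness, here is a quick sketch of that counting in Python (my own illustration, not the commenter's code; molecules are assumed to be given as (c, j) atom-count pairs, and exact `Fraction` arithmetic avoids the floating-point precision issues mentioned further down the thread):

```python
from fractions import Fraction

def count_orderings(molecules):
    """Number of possible orderings of the molecules by weight, over all
    positive atom weights (x, y), where molecule (c, j) weighs c*x + j*y.

    Two molecules (c1, j1), (c2, j2) swap order exactly when the ratio
    x/y crosses (j2 - j1) / (c1 - c2), if that critical ratio is positive.
    The answer is the number of distinct critical ratios, plus one.
    """
    ratios = set()
    n = len(molecules)
    for i in range(n):
        c1, j1 = molecules[i]
        for k in range(i + 1, n):
            c2, j2 = molecules[k]
            if c1 == c2 or j1 == j2:
                continue  # the relative order of this pair never changes
            r = Fraction(j2 - j1, c1 - c2)  # exact arithmetic, no precision loss
            if r > 0:
                ratios.add(r)
    return len(ratios) + 1

# (1,2) and (2,1) tie exactly at x == y, so there are two possible orderings.
print(count_orderings([(1, 2), (2, 1)]))   # 2
```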
Suppose the two metals have weights 1 and $$$m$$$. For every pair of molecules we can find an $$$m' > 0$$$ at which the two molecules have the same weight (if no such $$$m'$$$ exists, their relative order is fixed); i.e., when $$$m$$$ moves from $$$m < m'$$$ to $$$m > m'$$$, the ordering of that pair changes. Count the intervals of $$$(0, \infty)$$$ separated by the $$$m'$$$ values: the answer is the number of distinct $$$m'$$$ values plus 1.
Represent each molecule as a point; now we want to know the number of different directions, each parallel to some segment between two of the points.

Is it just me, or is this the worst Code Jam round to ever exist?
Two math tasks (well, actually one, since I'm pretty sure you can't solve C-small without A-small), and in addition one "fine-tune it until it works" interactive task.
Got 467th with A-small, B and C-small for a t-shirt, which is all I needed from this round.
Well, I think there are probably worse rounds than this (in particular, I personally think last year's Round 2 was worse), and actually I didn't think it was that bad. I am not sure about B since I didn't have time to pass it (though, as you said, I believe it's just fine-tuning some constants). I think A and C are just glorified standard problems (C is just finding the fraction with the smallest denominator between two fractions), but D is actually not bad (at least there is some idea involved, though it's not hard, and I was frustrated at some point due to missing stupid cases in my code).
I think I don't really expect much from GCJ qualification rounds like this anymore so it really wasn't as bad as I expected.
Actually, C-small is not all that mathy. You can just check all answers between (1, 1) and (200, 200), each in linear time. Don't even have to prove it to get it working.
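A sketch of that brute force in Python (my reconstruction, not the commenter's code; it assumes the molecules are given as (c, j) pairs listed in strictly increasing weight order, and that the task asks to minimize x and then y, which per the minimum-numerator/denominator argument elsewhere in the thread coincide anyway):

```python
def csmall(molecules, limit=200):
    """Brute force for C-small: find positive integer atom weights (x, y)
    so that the molecules, weighed as c*x + j*y, are in strictly
    increasing order.  Searches x ascending, then y ascending, up to
    `limit`, and returns the first valid pair.
    """
    for x in range(1, limit + 1):
        for y in range(1, limit + 1):
            weights = [c * x + j * y for c, j in molecules]
            if all(a < b for a, b in zip(weights, weights[1:])):
                return x, y
    return None  # "IMPOSSIBLE" in the real problem

# The (1,99), (2,1), (1,100) example from a comment below reduces to
# 98*y < x < 99*y, whose smallest solution is x = 197, y = 2.
print(csmall([(1, 99), (2, 1), (1, 100)]))   # (197, 2)
```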
Fair enough, I hadn't realized that. I guess I solved a non-existent subtask in O(N): in A-small I computed, for every permutation, the condition C1 * y < x < C2 * y; then I copy-pasted that code into C-small and iterated over x, finding the smallest valid y for it (if any) in O(1).
Why not (1,1) to (100,100), as mentioned in the problem?
The inputs (atom counts) are guaranteed to be in [1,100]. This does not mean that the outputs (atom weights) have to be in the same range.
What happens for the input with molecules (1,99), (2,1), and (1,100) in this order?
Well, the fraction $$$\frac{a + c}{b + d}$$$ is definitely between $$$\frac{a}{b}$$$ and $$$\frac{c}{d}$$$, so we have a 2x upper estimate. Turns out it is a tight bound, as the least denominator fraction between $$$\frac{98}{99}$$$ and $$$\frac{99}{100}$$$ is $$$\frac{197}{199}$$$. See Farey sequence or Stern-Brocot tree for formal or visual explanation.
"Not all that mathy"
"See Farey sequence or Stern-Brocot tree for formal or visual explanation."
You can choose only one.
Nah, I choose both.
To have the test set C1 accepted, you just need a bit of intuition on the bound.
To understand what is going on, and solve C2, yeah math will help.
Everything is simple if you know it in advance. I read about Farey sequences. Even Farey himself just conjectured without proving this simple (a+c) / (b+d) expansion.
True.
Regarding C1, in fact, it gets judged immediately, so you can just try some bound which fits into the time limit, and correct the bound if you get WA/TL.
I agree. When I saw how straightforward, classical, and purely mathematical C was, I couldn't believe my eyes. I never took the time to learn about continued fractions because they seemed useless in decent contests.
PS: This round was complete shit, but then I remember last year's Round 3, so I guess this is not really the worst in history, but it's definitely near the top. The quality has been pathetic lately in GCJ.
+1 The hard part of solving C boils down to finding the "least denominator" fraction between a/b and c/d. This is done with Continued Fractions / Stern-Brocot and is not nice. Ended up copying code to do this part.
Here is a recent(ish) task which requires the same technique.
I really hope that the solution for problem B is nice, because otherwise fine-tuning is, indeed, not fun either.
You don't need this stuff if the segment endpoints are rational.
Here is an elementary solution for that case.
Let's consider we are solving problem for $$$[\frac{p_1}{q_1}, \frac{p_2}{q_2}]$$$. We also allow fractions like $$$\frac{1}{0}$$$ as positive infinity.
Here are three cases:
In fact, this does the same thing: we need to find the first place where the continued fractions differ, and cut the answer there. There is a much harder solution that constructs the fraction from the other side; it's harder to understand and has complexity problems in the easiest implementation. Here the complexity is clearly O(log n), because it's equivalent to running two Euclidean algorithms in parallel.
The main problem with this solution is that it's helpless when we don't know the borders in advance but can only check whether a point is to the left of our segment, inside it, or to the right; the harder solution can handle that case too.
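The case list above did not survive formatting, so here is a hedged reconstruction of this standard three-case recursion in Python (my sketch, for a closed interval with nonnegative endpoints; q2 = 0 encodes +infinity):

```python
def simplest_in(p1, q1, p2, q2):
    """Smallest-denominator fraction (p, q) with p1/q1 <= p/q <= p2/q2.
    Assumes 0 <= p1/q1 <= p2/q2 and q1 > 0; q2 == 0 encodes +infinity.
    """
    # Case 1: an integer lies inside the interval -- take the smallest one.
    lo = -(-p1 // q1)                      # ceil(p1 / q1)
    if q2 == 0 or lo * q2 <= p2:
        return lo, 1
    # Case 2: both endpoints share an integer part k > 0:
    # shift the interval down by k, solve, and add k back.
    k = p1 // q1
    if k > 0:
        p, q = simplest_in(p1 - k * q1, q1, p2 - k * q2, q2)
        return p + k * q, q
    # Case 3: the interval lies strictly inside (0, 1): invert it.
    # Minimizing the denominator becomes minimizing the numerator, and
    # (as argued in a comment below) the two minima coincide.
    p, q = simplest_in(q2, p2, q1, p1)
    return q, p

print(simplest_in(9, 25, 11, 25))   # (2, 5): 2/5 is simplest in [0.36, 0.44]
```

Each pass through cases 2 and 3 is one step of the Euclidean algorithm on the endpoints, which gives the logarithmic bound mentioned above.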
sorry, ignore it
2/11, 17/35 < 1/2
"Obviously, solution is same up to adding it back" — how is this obvious? If all you want is to find a rational number between those two, then yes; but if you want to minimize either its denominator or its numerator, it definitely doesn't look obvious (minimizing y in x/y doesn't imply minimizing qy in p/q + x/y = (py+qx)/qy). Second problem: why doesn't subtracting p1/q1 repeatedly from p2/q2 overflow at some point? This is definitely not trivial to see either.
You may have misread this; notice that the thing we subtract is always integer and thus doesn't cause overflow
Ohh shit you're right. That clears both my concerns. Sorry
What happens for $$$0/1$$$ and $$$1/2$$$ in your solution? I know it's not hard to find the wanted fraction in that case, but I wanted to ask because you mentioned it's the full code, and I'm not sure if I'm missing some point.
It becomes $$$[2/1, 1/0]$$$ after applying the third rule, then $$$[0/1, 1/0]$$$ after applying the first; here the answer is 1. Going back, it's converted to 3, and then to 1/3. Seems to be correct.
How is it that when you are minimizing the numerator you are allowed to do step 1? For me step 1 only really makes sense when you want to minimize the denominator.
The fact is, the answer is the same if you minimize the numerator with the denominator as tie-breaker (it is quite easy to see why).
Could you please explain more?
Suppose the fraction in $$$[l; r]$$$ with the smallest denominator is $$$a/b$$$ and the one with the smallest numerator is $$$c/d$$$. Note that $$$c/d \le c/b \le a/b$$$, hence $$$c/b$$$ is a fraction in this segment with both the smallest numerator and the smallest denominator.
Damn, I spent the entire time trying to figure out why my A wasn't working. Anyone have the solution? Here's what I tried: find every ratio which would make A and B equal for every possible pair A,B of molecules, and perturb it by a very small amount in each direction, then try all of those slopes. But it didn't work... I hope it wasn't just due to not enough precision. https://gist.github.com/VitamintK/bf43a1191f0278ceca2d8cd89ec1fc18
The answer is the number of distinct positive ratios, plus 1.
damn, it was that simple? Just for the sake of closure, do you have any idea why my solution didn't work? It's just doing some extra work on top of that. Or is it probably just floating point imprecision?
I got A small+large, C-small and D-small, for 519th. For C-large, I reduced the problem to finding a point with minimum x and y such that m1 <= x/y <= m2, but wasn't able to get a good solution when m1 and m2 are fractions with values up to 1e9. Can anyone share the idea for this?
https://codeforces.me/blog/entry/65500?#comment-496352
UPD: well, a more detailed explanation is that we want to find a point with the smallest $$$x$$$ which is inside an angle whose sides are two rays from the origin (say, the lower one is $$$y/x=p_1/q_1$$$ and the other one is $$$y/x=p_2/q_2$$$). One can see that a point $$$(q_1 + q_2, p_1 + p_2)$$$ is ok for us, so the upper bound on $$$x$$$ is $$$q_1 + q_2$$$. Now we can find the smallest such $$$x$$$ using binary search and comparing the numbers of lattice points below these lines, and this is why I dropped the link
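The lattice-point counts themselves can be computed with the standard floor-sum routine; here is a generic sketch (my own, not the linked code), whose recursion mirrors the Euclidean algorithm:

```python
def floor_sum(n, m, a, b):
    """Sum of floor((a*i + b) / m) for i = 0 .. n-1, with all arguments
    nonnegative and m > 0.  This counts the lattice points (x, y) with
    0 <= x < n and 1 <= y <= (a*x + b)/m, i.e. points on or below the
    line, in O(log) time.
    """
    ans = 0
    if a >= m:
        ans += (n - 1) * n // 2 * (a // m)
        a %= m
    if b >= m:
        ans += n * (b // m)
        b %= m
    y_max = (a * n + b) // m
    if y_max == 0:
        return ans
    x_max = y_max * m - b
    ans += (n - (x_max + a - 1) // a) * y_max
    ans += floor_sum(y_max, a, m, (a - x_max % a) % a)
    return ans

print(floor_sum(5, 1, 1, 0))   # 0 + 1 + 2 + 3 + 4 = 10
```

Binary search on x, comparing the counts under the two boundary lines, then gives the smallest x for which an extra lattice point appears between them.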
UPD2: don't read my comment, it contains a complicated solution comparing to other comments here which are more related to your question
Wait, I thought about binary search, but if an integer point between the lines exists for x = m, it may not exist for x = m+1, right? Is it some modification to the procedure?
Edit: all right, I see. In binary search, we just compare lattice points using the code linked. Leaving this here for anyone else having the same doubts. Thanks!
I also calculated m1 and m2, as in A-small. For C-large, I was trying to find the first x such that ceil(x/m1) != ceil(x/m2). I did this using binary search. The final answer, as I see it, should be x, ceil(x/m2). But I kept getting WA; is there any logical flaw?
C-large seems to be a classic? Is it well-known? I would be very disappointed if that were the case.
Yes here
I was thinking of the LCA in the Stern-Brocot tree.
Did anyone else fail to notice that there are actually 4 tasks? Contransmutation was hidden because of scrolling.
I didn't, until 30 minutes had passed. Although, I couldn't solve it anyway haha. But really, they clearly shouldn't have changed their interface.
If anyone cares about B – here is my "solution" (seriously, how is this 23 points):
My solution is even simpler:
For the first 60 days, put a "1" into the first 12 vases, so each gets 5 tokens.
Then query 8 other vases to find the minimum; that's where we put our own token. Into all the other vases, put "1"s.
The overall idea: the first days will not give us any information about the distribution, so there is no point in asking. Let's just knock out more than half the vases ourselves, then ask the remaining ones for the distribution, and place the remaining tokens using that knowledge.
Gg I wanted to code this and submit in the last few minutes but failed to do so by a few minutes.
Yeah, I got too scared and decided to overkill it a bit, by doing this approach multiple times.
I still don't know what they expected us to do for 23 points — prove that this approach will have enough probability of working?
Are you saying it’s too easy? If you look at the scoreboard, lots of people struggled with it. The problem is pretty open-ended and there are many different things you could try which may not work.
Also, it’s not like your solution is that “simple”.
Yes. It requires no algorithmic thinking, in my opinion. I disagree that the problem is open-ended; I think there is pretty much one path forward (make certain vases lose, look at the distribution, repeat), and the fact that a lot of people struggled with it might just be evidence that you had to fine-tune the constants right (interactive tasks also always add difficulty for free, via the extra complexity of debugging). Even if I'm wrong and there are other realistic approaches, I don't think that compared with the other tasks this idea is worth 23 points. And if they also thought the same and decided to award more points just for the fine-tuning, it's all even worse. The official analysis contains a paragraph on fine-tuning and says you need to do it, but you can write a simulator and fine-tune automatically; maybe that's why they added more points to this task, but how many people actually did that, and didn't just select values at random until it worked?
It was by far the hardest problem for me. It seems like you stumbled upon the intuitive strategy quickly, but it was not at all clear for me that you should separate the vases and not treat them equally, or that picking your vase at the beginning is not likely to work.
I agree that the hard part is not really algorithmic, but that doesn’t mean it will be trivial as a problem.
Same here, actually. I implemented the solution of picking one vase, polluting all the others, and just putting one of my tokens into the picked one on the last day. That passed 10 out of the first 20 sample sets.
Then I tried still picking that vase, but instead of putting 5.1 tokens into each of the others, putting 4, querying all vases, and then topping up the smallest. Same percentage. I reckoned this was because I had reduced the pollution size by 1 token, and the query is so early (around day 76) that it can be completely out of date by the final day.
From this, the only logical choice I could see was reducing the problem by making some vases irrelevant, thus making the final query as late as possible.
So I think I went through all of the things you mentioned, and I'm genuinely curious where you got stuck, or which of these transitions weren't as obvious as they seemed to me.
Maybe there is just one path forward. I wrote 300 lines and didn't get more than 75%.
I think you dislike this task for the wrong reason. You guessed the intended solution right away. I don't want to use the word "correct" solution, because I don't see how this stupid thing is better than anything else.
I don't understand people saying "you could write a generator and check if this idea is correct". Yes, I can. I even did it. My idea wasn't correct. So, is it a guessing contest?
I hate this kind of task.
I have hated this kind of task ever since IOI 2010. But they're so common at this point that I think I got tired of complaining about them, as long as they don't let contestants wander off with no chance of actually solving the task.
If I take the Languages task from IOI 2010, for example, you're describing exactly my thoughts on it. But that task was actually open-ended and you could do pretty much anything, whereas here I felt that your options were very limited by the types of queries.
It's very possible I just accidentally stumbled upon the solution and can't come up with other approaches because of tunnel vision; so I'd really like to hear your or ksun48's thought process while solving this task, and then I'll concede that I'm wrong and there were indeed multiple paths with no way to know in advance which would actually work.
In that case I'll withdraw my complaint that the task gave too many points and replace it with a complaint that such a task exists at all, for exactly the arguments you mentioned.
My impression is that this is a marathon task rather than an algorithm task.
I'm really curious whether the organizers have a proved solution for it. Running it and seeing it working doesn't prove shit about the variance or the distribution of the probability.
I've only once set a randomized-algorithm problem, and I had to make the limits significantly looser than the success rate my source appeared to have, up to a level where I could prove the probability of my program getting AC. I did that only because I consider it a setter's duty to make sure a proven approach works.
I can't see how one can model this sort of behavior up to the level of proving that the probability of the algorithm failing on more than 10% of the inputs is really low. And if there is no such proof, I'm more certain that this problem shouldn't exist than they can be of their solution's correctness.
There was a fun probabilistic problem in GCJ round 1* about 4 years ago: you have some random permutations generated by 2 generators, where one of them is uniformly random; determine which generator generated which permutation, with at least [%] accuracy.
The solution was to run each generator locally, notice that the non-uniform one prefers permutations with specific properties (like $$$P_i = x$$$) and use these properties to estimate a probability that a permutation was produced by the non-uniform generator. That was a very "hey, as long as it works" solution, but interesting nonetheless.
Sounds interesting, yet inappropriate. This pretty much supports my point that these guys don't seem to care about proofs. In my world, it generally matters more to prove than to come up with lucky approaches that sometimes work, because that makes the difference between blindly following your intuition and actually thinking and getting out of your comfort zone. You can make up all sorts of problems that still require some decent idea and nice intuition, but that doesn't make them algorithmic puzzles, especially not if the solution is not clearly a correct one.
Sure. In serious rounds, for a yolo problem like this, I would require proof that the intended solution can't fail depending on implementation (picking slightly different properties). If it's just round 1A and solving everything except this is enough to pass, it's not a big deal.
I don't think the situation is as complicated as you make it out to be. For a given program, when you run it on a test case, it will succeed with some fixed probability $$$p$$$, independently of the other test cases (assuming the judge RNG isn't very bad). So, as mentioned in the editorial, the number of successes among 250 test cases has a binomial distribution, with parameters 250, p, and the probability of WA is $$$f(p) = \sum_{k=0}^{224} {250 \choose k} p^k (1-p)^{250-k}$$$. In the limit $$$250 \to \infty$$$, by LLN $$$f(p)$$$ would tend to 0 for $$$p > 0.9$$$, and 1 for $$$p < 0.9$$$. Of course we are not in this limit, and so there is a non-negligible probability of getting AC with $$$p < 0.9$$$ or WA with $$$p > 0.9$$$. For reference, $$$f(0.89) = 0.65$$$, and $$$f(0.91) = 0.25$$$.
You might wonder how the judges would prove that their solution has $$$p > 0.9$$$. Maybe they could do this by explicitly analysing their solution, using some bounds, maybe dynamic programming; I don't know. But in practice there's no need to. If you run your solution on $$$n$$$ test cases, and observe a success rate $$$\hat p$$$, then $$$\hat p$$$ has expectation $$$p$$$ and standard deviation $$$\sqrt{p(1-p)/n}$$$ (for reference, this is $$$0.3\%$$$ for $$$p = 0.9$$$, $$$n=10^4$$$). If you observe $$$\hat p = 0.95$$$, which they seem to have done, then you probably have better things to worry about (by CLT, the probability of $$$\hat p$$$ being more than say 10 standard deviations from $$$p$$$ should be very small). Indeed they claim they have a solution with $$$\hat p > 0.98$$$ (for reference, $$$f(0.98) < 10^{-10}$$$).
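For anyone who wants to reproduce these numbers, the binomial CDF above is a one-liner (this is my own sketch; `math.comb` needs Python 3.8+):

```python
from math import comb

def wa_probability(p, cases=250, needed=225):
    """Probability of WA: fewer than `needed` of `cases` independent test
    cases succeed, each with probability p.  This is the binomial CDF at
    needed - 1, i.e. the f(p) from the comment above.
    """
    return sum(comb(cases, k) * p**k * (1 - p)**(cases - k)
               for k in range(needed))

# f(0.89) comes out around 0.65 and f(0.91) around 0.25, as quoted above.
print(wa_probability(0.89), wa_probability(0.91))
```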
Of course, the situation would be entirely different if there was a possibility of constructing malicious test data.
Anyway, this problem seems to be more or less orthogonal to most of competitive programming, but that doesn't make it a bad problem in itself.
Alright, so I haven't worked much with CLT yet, but I pretty much understood your point, and I'm sure enough of 2 things:
That you can't conclude anything practically, not even if you have a perfect RNG (which I believe is a decent assumption).
That "by CLT", as you yourself mentioned, works for $$$N \to \infty$$$. I'm not saying I wouldn't be willing to bet all my money on it; I'd do that for the Goldbach Conjecture as well. But practice doesn't prove anything, and any practical experiment you do is independent of future ones, so there's really no way to correctly, formally conclude anything this way.
You're basically saying that in practice there's no need to prove. I sort of agree with that on the contestants' side. On the setters' side, though, it's their duty to prove the solutions they have. Why? Because if they were free to author anything that happens to work, nothing would be fun anymore. When thinking about a problem, you're reasoning and sometimes letting go of ideas that seem unprovable, because you must have a certain mechanism for choosing which directions of thinking not to pursue.
I don't think this is a fair comparison. In the case of estimating a fixed probability, running more trials makes you increasingly "confident" in the result, no matter what your priors are.
With something like the Goldbach Conjecture or Riemann Hypothesis, this is not exactly true. I wouldn't say that verifying some large amount of finite cases can quantitatively push your probability that the Goldbach Conjecture or Riemann Hypothesis is true. There could be some legitimate reason why the first counterexample is very large.
However, if the probability for this problem is actually $$$p = 0.8$$$, it actually means that our trials were very "unlucky" and more trials would most likely have convinced us of this.
Mine was overloading 16 vases for the first 95 turns, then looking at the other 4 vases' sizes and trying to resolve the tie on day 100.
I used the same approach of checking the success percentage. You were lucky enough to pick exactly 12 vases; if you had picked fewer than 12 or more than 15, the algorithm wouldn't reach 90%+ accuracy.
I wasn't lucky. I never tried to use their generator. When I finished writing the code, I just made 4 or 5 submissions with different constants to choose the best value, and 12 vases passed.
The only sensible way to solve this task (even for the first submission) was to write a wrapper around the console interaction, then write two implementations of the wrapper interface (the real console interaction, and a fake one with test generation), which lets you generate a million tests on the fly and check them.
And then start writing complete bullshit like yours, watching the percentage of successful attempts, until it reaches 91%+.
The current GCJ interface made me wonder for quite a while: what could be the rationale for such design? I followed the thread on Google Groups where the organizers answered some questions about it, but I still don't get it.
Maybe Google is already captured by robots? And they use the contest to try their devious designs on us?..
At least the problems themselves are still made by humans, and interesting enough to take part in spite of the strange interface. Thanks to the setters for holding their ground!
If I open the scoreboard in Chrome iOS, in landscape mode I can see exactly 0 scores. Because the non-scrolling top part takes all the screen and no amount of zooming or scrolling can help me.
I too can assemble a list of what is wrong for me in the current interface, but after following the thread linked above, I see there is no point in doing so here.
Every freaking time I have to modify the CSS to get more space for the problem statements. It really amazes me how a company like Google can suck that much at designing UI.
Perhaps one point from the thread is worth repeating here: if you want the criticisms to be heard by the Google staff, take the discussion to their platform.
James Damore's case at Google could provide some insight into the 'what'. Specifically, it seems like midwits at (among other places) middle management focusing on non-issues to hide the fact that they're midwits; this GCJ rebuild 2: Electric Boogaloo, a commitment to social justice and the recent comedy with Google's AI Advisory Board (it was dismantled about as quickly as it was formed, after a protest) are just some examples I've noticed. As Google got big, it became less of a place of excellence.
Did anyone face issues with bias in the randomness for P2 (the interactive one)? My solution worked on the local grader every time I ran it with a random seed. I submitted it 5-6 times, but all gave WA. To fix this, I just generated a random permutation to shuffle the vases before outputting. Theoretically this shouldn't make any difference, but with it I got AC, which leads me to believe there is some issue with the randomness. AC code: https://pastebin.com/hwUYZHdp WA code: https://pastebin.com/UMiNBkkv
I had the same problem, but I added the permutation only after your comment. Now I have AC in practice mode and feel extremely frustrated, because I didn't qualify despite a correct solution to B.
I think this situation is dishonest, because the test was obviously generated manually, while the problem says the test is generated randomly.
I really don't think the test was generated manually. I would say it's because of a fault in the randomness: it's pseudo-random, not truly random. Also, you could try to appeal the decision; maybe that would help.
I don't know whether this was due to randomness, but my solution was supposed to pass about 20% of the time, and I made around 30 submits, WA...
Had the same problem. My solution gets AC 9/10 times locally (using the testing tool with a random seed), but unfortunately the data they used for judging was one of those 1/10 bad seeds.
After the competition I added a random permutation of all the urns, and on the server I now get AC around 50% of the time. So they used a really unfortunate seed for me.
Oh well, I got < 1000, so I still qualified for the next round. But I do feel a bit cheated on problem B.
Me during the contest...
Feels bad to solve nothing :(
I followed pretty much the exact editorial solution for New Elements Part 2, but I kept getting WA in contest. Can someone check out my code?
https://ideone.com/gOO7Xn
And congrats to everyone who qualified for Round 3!
Where do we find the editorials?
There's an Analysis tab right next to each problem.
Alternatively, I would appreciate it if someone either (a) gave a breaking case, or (b) shared the test data from GCJ so I can find the breaking case myself.
I just ran a few tests, so I don't know why your program gives WA. Here are some cases:
Hope it helps
Can someone clarify the solution to problem D? The simple cycle case seems incorrect in the analysis. Consider a simple cycle with lead as one of the nodes. (The second transformation just pushes lead to a useless node.) You would not create an unbounded amount of lead in this case as you would push the amounts around the cycle of metals without creating more metal.
If anything, perhaps I'm misunderstanding the editorial or problem? I'm very confused at this problem as my solution matches my brute force solution. But neither pass the data. :(
Yes, $$$Q'_L$$$ (as in the editorial) can have one cycle that contains vertex 1. It should be handled. The rest of the editorial looks fine to me.
Thank you for the help! I figured it out. My issue is I reduced my metagraph for no reason (removed parallel arcs). My sim also did this, so both were wrong in the same way. RIP t-shirt... :(
The editorial seems to just be missing explanations of the special cases (cycles that don't branch).
During the contest I tried to test my solution locally using the testing tool, but it just hung without any output. I thought I had misunderstood the IO format (it's not easy to understand for interactive problems, and something is different every time). In the end I gave up and started working on problem C.
But after the contest I found that the test script hung when it tried to print to stderr around test case 200. I'm not sure why; perhaps the subprocess.PIPE buffer filled up because nothing was draining it. I modified the script a little (I removed stderr=stderr_pipe on line 35 in interactive_runner), and it started to work. With a working testing tool I was quickly able to find correct parameters for my solution.
I also had that hanging issue. I just re-downloaded the tool and it worked afterwards; maybe they had fixed something in the meantime.
Is there any way to get the testdata ? I'm getting wa on D but I couldn't find any mistake.
Ok , found it.
Wait, did you find your mistake, or the test data? If the test data, where can we find it?
Found my mistake, not the testdata
My solution to D seems simpler than the official analysis, and doesn't require computing strongly connected components. Here's a description if anyone is interested.
As in the official analysis, work backwards from lead to find all metals that will generate lead, and delete/ignore all metals that don't (and the corresponding edges).
If you start with no metal (after the deletions above), the answer is clearly 0. Otherwise, you can produce at least one gram of lead.
Check if lead can be used to create more lead: starting from lead, follow edges in the graph until you get back to lead, to a dead-end, or to a metal which still has more than 1 out-edge. In the last case, you can produce more than one gram, and they can each produce a gram of lead, so the answer is UNBOUNDED. Otherwise, a gram of lead can produce at most one gram of lead, and so there is no point transforming lead once it is produced.
For each metal for which you start with a positive amount, determine how much lead it produces, using recursion with memoisation. Keep track of which metals are on the recursion stack, and if you cycle back to one of them, the answer is UNBOUNDED because you can start with a gram of that metal and produce both a gram of the metal and a gram of lead.
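Here is a sketch of steps 1-4 in Python. The problem details are assumed from memory and hedged accordingly: each gram of metal i can be transformed into one gram each of two products, `recipes` is a dict of my own devising, metals absent from it are treated as inert, and the modular arithmetic from the real statement is omitted:

```python
from collections import deque

LEAD = 1

class Unbounded(Exception):
    pass

def max_lead(n, recipes, grams):
    """recipes[i] = (a, b): one gram of metal i becomes one gram each of
    metals a and b; grams[i] is the starting amount.  Metals are numbered
    1..n; a metal absent from `recipes` is inert (a simplification of this
    sketch).  Returns total grams of lead obtainable, or "UNBOUNDED".
    """
    # Step 1: backward reachability -- keep only metals that can become lead.
    rev = {i: [] for i in range(1, n + 1)}
    for i, (a, b) in recipes.items():
        rev[a].append(i)
        rev[b].append(i)
    kept, queue = {LEAD}, deque([LEAD])
    while queue:
        for prev in rev[queue.popleft()]:
            if prev not in kept:
                kept.add(prev)
                queue.append(prev)

    # Step 2: if we start with no useful metal, no lead is ever produced.
    if all(grams.get(i, 0) == 0 for i in kept):
        return 0

    # Step 3: can one gram of lead be turned into more than one gram of lead?
    cur, seen = LEAD, set()
    while cur not in seen:
        seen.add(cur)
        outs = [x for x in recipes.get(cur, ()) if x in kept]
        if len(outs) >= 2:
            return "UNBOUNDED"  # the gram splits into two lead-producing grams
        if not outs or outs[0] == LEAD:
            break               # dead end, or exactly one gram comes back
        cur = outs[0]

    # Step 4: memoised grams-of-lead-per-gram, with cycle detection on the
    # recursion stack: a reachable cycle lets us mint metal for free.
    memo, on_stack = {}, set()

    def lead_from(i):
        if i == LEAD:
            return 1
        if i not in kept:
            return 0
        if i in memo:
            return memo[i]
        if i in on_stack:
            raise Unbounded
        on_stack.add(i)
        memo[i] = sum(lead_from(x) for x in recipes.get(i, ()))
        on_stack.discard(i)
        return memo[i]

    try:
        return sum(g * lead_from(i) for i, g in grams.items() if g)
    except Unbounded:
        return "UNBOUNDED"

# 2 -> lead + 4, 3 -> 2 + 4, metal 4 never reaches lead; 1g of 2 plus 2g of 3.
print(max_lead(4, {2: (1, 4), 3: (2, 4)}, {2: 1, 3: 2}))   # 3
```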
In case anyone is interested, here is a variation, where 3. and 4. are slightly simplified:
As above, run backwards BFS from lead to find metals that produce lead. Delete the rest.
Run forwards BFS from metals with non-zero initial amount, to find metals that can occur. Delete the rest (and their edges).
For any non-lead metal that cannot be produced by anything else, push its grams forward, then delete it. Check if the resulting metals can still be produced, by keeping track of in-degrees.
If there is still a node that produces more than one metal, the answer is UNBOUNDED. Otherwise it is the total number of grams left.
Where are the official solutions located? I cannot find them.
There is a bar above labeled "Show round overview".
Tell this to the codejam ui designers