Meta Hacker Cup Round 3
Good morning! Meta Hacker Cup Round 3 starts on Saturday, November 4th, at 10 AM Pacific, in under 14 hours! You're automatically registered for Round 3 if you placed in the top 500 of Round 2 or had a bye from Round 1.
Important info:
- You'll qualify for the finals if you place in the top 25 in this round.
- If you place in the top 200 of this round, your T-shirt prize will have a Top-200 badge on the sleeve; shirts earned by those who didn't reach the top 200 in this round won't have the badge.
- The round will be 3 hours long.
You can find more info about T-Shirt details on this post. Good luck, have fun, and try not to get spooked by the problems!
Unfortunately, it clashes with the ICPC regional. Couldn't get a top-200 badge this time :(
lol my facebook account got banned
lol my facebook account got banned :((
Maybe next year 😔
"If you can't log in to your FB account, you can send your solutions to [email protected] during the contest and we can judge them shortly after the contest."
The only response i got...
Can anyone register https://www.facebook.com/profile.php?id=61551793797917 please? my account was banned (i did not use it...)?
Pretty please send out some kind of announcement about the starting time next time, I (and some other people) got screwed due to daylight saving time changes :/
Thanks for the contest! Some thoughts (written before the end of the round):
Overall, while I think most of the problems were reasonably nice in isolation, the round as a whole was far too easy to meaningfully distinguish the top 25 competitors from the rest of the pack. We're about 2h10m into the contest, and the cutoff for the top 25 is already 100 points. This means the finalists will be decided either by FSTs, which feels unsatisfying (and not especially correlated with problem-solving ability), or by relatively minute differences in time penalty. I think this is a much worse scenario than e.g. MHC 2021 R3, where finalists were determined by the ability to solve at least two of the three harder problems in the round.
Problem-specific feedback:
A: Pretty good problem. As a matter of personal preference I don't love problems where the time complexity can't be evaluated nicely by hand, but in this case it's easy enough to intuit that the solution will run fast enough.
B: Good problem; not ridiculously challenging, but appropriate enough for its position. It took a bit of thought to reduce from O(N^3) to O(N^2), and the process of doing so was reasonably nice.
C: The idea behind this problem was nice, but the implementation was very unpleasant (obviously, this might be my fault for not implementing in the cleanest possible way). Even after spending a fair amount of time thinking about how to implement in the least edge case-heavy way, my code ended up running nearly 200 lines (again, possibly a skill issue), not counting my template. Heavy implementation is especially frustrating in an R3 where the finalists will largely be determined by penalty time. Because of the difficulty of implementation, I thought this was by far the hardest problem of the round.
D: Fairly standard rerooting problem. Not too bad and easy to dispatch if you've seen the idea before, but nothing particularly exciting either.
E: This problem is pretty neat in isolation, and would be a great problem in a hypothetical contest in which formal proofs must be submitted with your code. Having said that, I think it's highly ill-suited for position E on Round 3. This is partly a matter of taste--the problem feels too straightforward in a sense to be the last problem identifying the finalists--but also because in a round where the key to advancing is maintaining a good penalty time and where using validation tests is costless, contestants are encouraged even more strongly than usual to attempt proof by AC, and those who spent the least time attempting to prove the solution before implementing, running validation tests, and downloading the full input were rewarded with shorter penalty times.
also, second year in a row FSTing out of finals after passing validation tests but still writing a bug that evidently only I had lol
I don't think C is hard to implement: Code
Thanks for sharing! Looks like the solution I implemented was faster than intended, which potentially explains why the implementation is more annoying.
A is $$$\mathcal{O}^*(\binom{k+\frac{n}{k}}{k})$$$ to check some value, so it can be done by hand.
Is problem B similar to 1870E - Another MEX Problem?
I messed up while copy-pasting my solution from this Codeforces problem.
Same. I convinced myself that the solution I read from some comment was quadratic and didn't react fast enough to fix it. Would be the easiest full score :(
This solution works in both problems.
Yes, I just copy pasted my submission(for CF problem) and modified it a bit.
I’ve never felt this dumb, helpless, hopeful, sad, nervous, happy and confused within a 3hr span before :)
Wow, Japan did phenomenally, especially considering that it's currently 5 AM there. It looks like 7 of the top 25 finalists will be from Japan this year.
Why can't FBHC happen 2 hours earlier? Currently it starts at 10 AM California time, which is midnight in many parts of Asia (where 1/3 (?) of the world's population and 1/2 of the finalists live).
Thank you for the contests :)
I found the problems of round 2 and 3 cool!
Unfortunate that I got systest on A and missed top 200 by a few places because of that :x
Hopefully, see you next year with an online judge
and feedback after submissions.

Also, one fun fact about problem E: I participated virtually in Codeforces Round 897 (Div. 2), and during that contest I misread https://codeforces.me/contest/1867/problem/F as today's E (and solved it), then realized I had read the problem incorrectly :) And today I have solved the correct problem :)
In B, I can't believe there wasn't any case in the pretests (validation input or whatever) where hash 0 comes only from 0 orders. I even contemplated it once while writing, but thought I'd read that it had to be a positive number of orders, and indeed passed pretests. Such a needless constraint. Glad to get the top-200 badge, but it ends the night on a sour note.
Missed the Top-200 T-shirt because I thought that with auto it = s.rbegin(), we need to use it-- instead of it++ to iterate over the set. Anyways, learnt some more unknown secrets of C++.

I might have the dumbest reason to miss the top-200 T-shirt.
In a codeforces round I changed my ll typedef from long long to int, since I used ll everywhere but needed to speed up my program, and didn't want to replace it everywhere. And today I accidentally used this bad typedef instead of the good one, and my D overflowed on 1 test :(.
I think the common solution to D assumes $$$C = \sum_{(u, v) \in E} \min(deg(u), deg(v))$$$ is small and does that much computation. Apparently, this is not assumed by the editorial, which relies on an extra property that can make the implementation a little more boring.
Actually, it's quite well-known that for all graphs with $$$|E| = |V| = O(n)$$$, $$$C = O(n^{1.5})$$$ (Cf. IOI 2009 Regions). I was pretty much sure $$$C = O(n)$$$, but I didn't find an immediate proof and I also thought $$$n^{1.5}$$$ would be fast enough anyway.
Now I see that $$$C = O(n)$$$. Root the tree arbitrarily. WLOG assume $$$u$$$ is the parent of $$$v$$$: $$$C = \sum_{(u, v) \in E} \min(deg(u), deg(v)) \le \sum_{(u, v) \in E} deg(v)$$$. But each $$$v$$$ occurs at most once, so this is just at most $$$2n-2$$$.
This same reasoning works for planar graphs and basically any graph with bounded degeneracy.
Depending on the full test set you were sent, approaches that naively do $$$\sum_{(u,v) \in E} (deg(u) + deg(v))$$$ work also passed within the time limit, because (I think) Meta Hacker Cup gives you a random subset of the full dataset they have made. There were very few tests with high degrees, so if you were lucky, no high-degree test case was in yours. For example, my input contained a test case with $$$\max(deg(u)) = 500000$$$, while with other people's full inputs the max was $$$35153$$$, and then something naive passes easily. I don't know how to feel about this. Probably no one has abused it, but it's weird that test sets are not equally strong for different competitors.
I don't know if it has something to do with how Meta generates test data, but my code for D, which had an extremely obvious bug, managed to slip through and get AC.
I only pushed one edge into the tree even though the problem stated the tree is undirected. I initially did this because most Codeforces problems with similar input formats also guarantee $$$P_i < i$$$, which Hackercup doesn't. I eventually realised this when testing samples and changed other parts of my code, but somehow forgot the input-processing part, and it somehow still passed.
Edit: As some replies have pointed out, the fact that $$$P_i < i$$$ is irrelevant, which definitely explains the whole situation. But it's still funny how something that feels so wrong can be completely correct.
Do you know how to generate a counter test case against your solution?
I checked your code, and it looks like it will get AC on all test cases generated with this input format.
Your adjacency list is the list that will remain if we delete all parents from child nodes after rooting the tree at 1.
Actually yeah, you are absolutely right. After some thought and mentally proving stuff I did convince myself that the $$$P_i<i$$$ condition is not needed for storing only 1 direction to work.
Thanks for pointing that out!
The condition $$$P_i < i$$$ is irrelevant for this. In any case, the structure you created represents a tree rooted at vertex $$$1$$$ with edges going out from the root. It is easy to prove by inspecting vertices in reverse depth order: all leaves have only one neighbour, so their edge is directed towards it; for higher vertices, as all edges of their children are already occupied, the only option for their edge is the edge to their parent, and so on.
I wanted to say that I really liked the problems. Based on several previous years I was expecting a contest full of data structure bashing, but the problems were interesting, particularly C and E.
The difficulty was too low, though, and I feel like these things are connected: if you don't have top-tier problemsetters, it is easier to achieve high difficulty by asking for harder data structures and just a lot of code overall. But I appreciate the attempt at making more interesting problems.