The qualification round starts in 50 minutes. Let's share scores and discuss the problem after the contest. Useful links below.
Judge System
Official livestream
FAQ
Good luck!
Is it rated?
This is literally the battle between "downvoting 'is it rated' comments" and "upvoting reds no matter what".
I trust the community and hope it's gonna be a net loss.
A — 2
B — 231,591
C — 1,426
D — 440,332
E — 417,896
Total 1,091,247
I guess we will be around top 30. I would like to hear how to get around 1.2M.
A — 2
B — 239,997
C — 1,616
D — 243,517
E — 419,938
D was simply too big to fit into our TSP solver on our machine.
Which solver did you use? I tried LKH for B but it did nothing in 30 minutes so I decided to write some heuristics myself.
TSP solvers are not as useful here as other approaches. Just implement simple simulated annealing with random section reversal and you can get very good results. You can also control how much time you want to spend.
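For reference, a minimal sketch of that kind of annealer, assuming the Hash Code interest factor as the objective and a permutation of slide indices as the state (everything here is illustrative, not the commenter's actual code; the temperatures and seed are arbitrary tuning constants):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<set<int>> tags;  // tag set per slide (fill from the input file)

// Hash Code interest factor between adjacent slides:
// min(common tags, tags only in a, tags only in b).
int edgeScore(int a, int b) {
    int common = 0;
    for (int t : tags[a]) common += tags[b].count(t);
    return min({common, (int)tags[a].size() - common, (int)tags[b].size() - common});
}

// Simulated annealing with random section reversal. A reversal only changes
// the two boundary edges (edgeScore is symmetric), so the delta is cheap.
vector<int> anneal(vector<int> order, double timeLimitMs) {
    mt19937 rng(12345);
    uniform_real_distribution<double> unif(0.0, 1.0);
    const double t0 = 5.0, t1 = 0.01;  // start/end temperatures, tuning constants
    int n = order.size();
    auto start = chrono::steady_clock::now();
    for (;;) {
        double ms = chrono::duration<double, milli>(chrono::steady_clock::now() - start).count();
        if (ms > timeLimitMs) break;
        double temp = t0 * pow(t1 / t0, ms / timeLimitMs);  // geometric cooling
        int i = rng() % n, j = rng() % n;
        if (i > j) swap(i, j);
        if (i == j) continue;
        int before = 0, after = 0;
        if (i > 0)     { before += edgeScore(order[i-1], order[i]); after += edgeScore(order[i-1], order[j]); }
        if (j + 1 < n) { before += edgeScore(order[j], order[j+1]); after += edgeScore(order[i], order[j+1]); }
        int delta = after - before;
        // Always accept improvements; accept worsenings with prob. exp(delta/temp).
        if (delta >= 0 || unif(rng) < exp(delta / temp))
            reverse(order.begin() + i, order.begin() + j + 1);
    }
    return order;
}
```

The time limit doubles as the "how much time you want to spend" knob mentioned above.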
I mean, what existing libraries are a good fit. I can implement some good stuff myself but sometimes it might be better to delegate some work to a professional :)
Doesn't using a TSP solver force you to hold the whole graph, i.e. n^2 edges?
1) You can do it with enough memory.
2) You don't have to store that: when you need the distance between two vertices, compute it on the fly.
3) Or store only the best 1000 edges from each vertex (see the sketch below).
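A minimal sketch combining points 2) and 3), with the pair score passed in as a callback (the function name and K are illustrative). Building the candidate lists still costs O(n^2) score evaluations, but memory stays O(n*K):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Keep only the K best-scoring neighbours of each vertex instead of the full
// n^2 edge matrix; `score` is evaluated on demand, nothing is precomputed.
vector<vector<int>> bestNeighbours(int n, int K, const function<int(int,int)>& score) {
    vector<vector<int>> best(n);
    for (int v = 0; v < n; v++) {
        // min-heap of (score, neighbour) keeps the K highest-scoring edges
        priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;
        for (int u = 0; u < n; u++) {
            if (u == v) continue;
            pq.emplace(score(v, u), u);
            if ((int)pq.size() > K) pq.pop();  // evict the current worst
        }
        for (; !pq.empty(); pq.pop()) best[v].push_back(pq.top().second);
    }
    return best;  // O(n*K) memory instead of O(n^2)
}
```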
A – 2 B – 228498 C – 1775 D – 441242 E – 560550
In the extended round I tried so many things for B, but no way past 228K.
majk I think your 239,997 is the max score for B. Could you explain your approach?
Thanks.
Judging by your scores, you probably made the same mistake as we did :) Joining V's into pairs before arranging them into a path is not good enough; one has to create pairs while building the path. It is possible to get 561k+ on E.
Makes perfect sense, thanks.
EDIT: Yup. I've now implemented just the simplest O(N^2) greedy solution for E that takes the best next picture, and got 550k.
EDIT 2: I added a penalty for using pictures with a big number of tags, and the score jumped to 565k. That would have given us first place with a 10k lead.
These statements like "if I had implemented something I know post factum to be correct [without affecting other scores], I'd have outplayed everyone by 10k" never stop being funny. Our team managed to increase the score by 65k 10 minutes after the end, and by 30k more after multiplying a magic constant by 2, but this doesn't necessarily mean that if the contest had lasted until :45 we'd have been top-don't-remember-how-much-but-a-little.
I don't see anything funny here. And certainly I don't claim we would have won after 20 more minutes, because I implemented what tourist mentioned, not my own idea. I'm afraid we would only have dropped lower and lower in the leaderboard, actually.
Also, insert "let people enjoy things" meme.
Can you give more details about the implementation? Don't you need to search for the best two pictures, which gives N^2 for a single step and N^3 in total? That seems to be too slow given the size of N. What am I missing?
Build the slideshow greedily, slide by slide. If the last slide is complete, just find the best next image over all images (considering all of them as horizontal). Otherwise, the last slide contains one vertical image, so find the second one, over all vertical images left, that gives the best score.
Just this idea gets 547117 on E.
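A rough sketch of this greedy, under the assumption that ties are broken by index and the show starts from photo 0 (plain sets are used for clarity; bitsets, mentioned further down the thread, make the scoring much faster):

```cpp
#include <bits/stdc++.h>
using namespace std;

// min(common, only-in-a, only-in-b): the Hash Code interest factor
int interest(const set<int>& a, const set<int>& b) {
    int common = 0;
    for (int t : a) common += b.count(t);
    return min({common, (int)a.size() - common, (int)b.size() - common});
}

// Greedy slide-by-slide construction as described above. tagsOf[i] is the
// tag set of photo i, isVertical[i] its orientation.
vector<vector<int>> greedySlideshow(const vector<set<int>>& tagsOf,
                                    const vector<bool>& isVertical) {
    int n = tagsOf.size();
    vector<bool> used(n, false);
    vector<vector<int>> slides;
    set<int> lastTags;            // tags of the last *complete* slide
    vector<int> pending = {0};    // the slide being built
    used[0] = true;
    while (true) {
        if (pending.size() == 1 && isVertical[pending[0]]) {
            // Half-built vertical slide: pick the best second vertical photo.
            int best = -1, bestScore = -1;
            for (int i = 0; i < n; i++) {
                if (used[i] || !isVertical[i]) continue;
                set<int> merged = tagsOf[pending[0]];
                merged.insert(tagsOf[i].begin(), tagsOf[i].end());
                int s = slides.empty() ? 0 : interest(lastTags, merged);
                if (s > bestScore) { bestScore = s; best = i; }
            }
            if (best == -1) break;      // no partner left; drop the half slide
            used[best] = true;
            pending.push_back(best);
        }
        // Close the pending slide and remember its tag set.
        lastTags.clear();
        for (int p : pending) lastTags.insert(tagsOf[p].begin(), tagsOf[p].end());
        slides.push_back(pending);
        // Pick the best next photo over all remaining images,
        // scoring each one as if it were horizontal.
        int best = -1, bestScore = -1;
        for (int i = 0; i < n; i++) {
            if (used[i]) continue;
            int s = interest(lastTags, tagsOf[i]);
            if (s > bestScore) { bestScore = s; best = i; }
        }
        if (best == -1) break;          // all photos placed
        used[best] = true;
        pending = {best};
    }
    return slides;
}
```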
Can you tell me on what basis you are selecting the best V pair (i.e. the second V)?
We were choosing the best V based on the largest number of common tags, but it's not good enough.
He explained exactly that. Don't choose the whole pair at once. Choose the best first picture, then the best second picture.
I just want to know on what basis you are selecting the "best" picture.
giving the best score so far
Thanks :). Implemented the solution and got 500K+ for E. Can you tell me how to improve on it further?
Read editorials/write-ups for similar competitions like Topcoder Marathon. You will learn new things that will be useful in Hash Code too. For example, read about simulated annealing. Or watch my video about Hash Code, link.
Just out of curiosity, roughly how long did it take to run the greedy solution on all test cases? (And on what PC specs?)
2 minutes per test on a good laptop. We used bitsets to store tags in D and E.
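For anyone curious, tag-sets-as-bitsets scoring might look like the sketch below (MAXTAGS is an assumption; tag strings have to be remapped to small integers first):

```cpp
#include <bits/stdc++.h>
using namespace std;

// With tags remapped to integers < MAXTAGS, each slide's tag set becomes one
// bitset, and the interest factor is a handful of word-parallel operations.
const int MAXTAGS = 1 << 15;          // assumed bound on distinct tags
using TagSet = bitset<MAXTAGS>;

int interest(const TagSet& a, const TagSet& b) {
    int common = (a & b).count();     // tags on both slides
    int onlyA  = a.count() - common;  // tags only on the first slide
    int onlyB  = b.count() - common;  // tags only on the second slide
    return min({common, onlyA, onlyB});
}
```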
So fast... We used Python 3 and it took us 4 hours on test case E alone; we used normal sets to store tags, on a good laptop.
Maybe pypy would help. And yeah, C++ is fast.
A — 2 B — 149475 C — 1582 D — 436321 E — 414517
We are the only team among all my friends who sucked so much at B :/
A — 2 B — 64428 — This one was a failure :V C — 1368 D — 388349 E — 323141
That's about it~
What was the third test case for?
At least the other 3 cases mattered this time. Last year, only one test had a significant impact on the overall score.
It was good for quickly checking whether your solution was broken or not.
A — 2 B — 202830 C — 1470 D — 396293 E — 389426
We used a greedy approach to solve the TSP. Our solution also involved some randomization. We created the vertical pairs by choosing two pictures with the fewest common tags.
B — 202734 C — 1439 D — 398968 E — 366042
With the same approach but the Vs are paired randomly.
We used the same approach for pairing, and the same greedy approach mentioned above: just choose a random picture to start and add the picture with the maximum score next to it. With some more optimizations, it was enough to get A — 2 B — 225171 C — 1573 D — 434273 E — 412744.
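For illustration, the "fewest common tags" pairing mentioned a couple of comments up could look like this (the bounded candidate window is an assumption to keep it fast; note tourist's earlier point that pairing up front like this loses to pairing while building the path):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Pair vertical photos so that each pair shares as few tags as possible,
// scanning only a bounded window of candidates to stay near-linear.
vector<pair<int,int>> pairVerticals(vector<int> left,  // ids of V photos
                                    const vector<set<int>>& tagsOf) {
    vector<pair<int,int>> pairs;
    while (left.size() >= 2) {
        int a = left.back(); left.pop_back();
        int bestIdx = (int)left.size() - 1, bestCommon = INT_MAX;
        int window = (int)min(left.size(), (size_t)500);  // tuning constant
        for (int k = 0; k < window; k++) {
            int idx = (int)left.size() - 1 - k;
            int common = 0;
            for (int t : tagsOf[a]) common += tagsOf[left[idx]].count(t);
            if (common < bestCommon) { bestCommon = common; bestIdx = idx; }
        }
        pairs.push_back({a, left[bestIdx]});
        left.erase(left.begin() + bestIdx);
    }
    return pairs;
}
```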
A — 2
B — 202,740
C — 1,764
D — 439,070
E — 417,037
Total 1,060,613
Key to top 50: an i7-8750H and a lot of different heuristics.
B had a large number of tags. From the limited checks we did, we found no tag repeated more than twice in the entire file. That was good enough to write a custom solution: we build an inverted index from tags to photos, and then, for a group of tags, find the photo that shares as many tags as possible. Since there are not many tags in common, this results in an OK score.
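As a sketch, an inverted index like the one described might look like this (names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Inverted index: for each tag, the photos containing it. To find the photo
// sharing the most tags with `photo`, only photos reachable via a shared tag
// need to be counted, instead of scanning the whole file.
int bestMatch(int photo,
              const vector<vector<int>>& tagsOf,      // photo -> tag ids
              const vector<vector<int>>& photosWith,  // tag id -> photo ids
              const vector<bool>& used) {
    unordered_map<int, int> shared;                   // candidate -> #common tags
    for (int t : tagsOf[photo])
        for (int other : photosWith[t])
            if (other != photo && !used[other]) shared[other]++;
    int best = -1, bestCount = 0;
    for (auto& [cand, cnt] : shared)
        if (cnt > bestCount) { bestCount = cnt; best = cand; }
    return best;  // -1 if no remaining photo shares a tag
}
```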
For C, D, E: we only allowed edges between slides that have almost the same number of tags, and chose greedily.
Our V pairing function was very bad: we paired up images with almost the same number of tags, maximizing the union of the pair's tags (for a V image A with N tags, find another V image B with [N - t, N] tags such that |Union(A.tags, B.tags)| is maximal).
Ultimately we decided to drop V images from the C set.
Good contest, really wish we did better on C, D and E.
How did you all get 230k+ on B? We got our score on B using some greedy, then improved D/E by a lot but didn't get a single extra point on B.
A 2
B 204630
C 1742
D 436266
E 551892
Some thoughts on B: each tag occurred at most twice, and the intersection between any two photos had size either zero or three. This essentially reduces the problem to finding (or approximating) a Hamiltonian path in an undirected graph (with edges joining pairs of intersecting photos).
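Given that observation, building the graph is cheap, since each tag nominates at most one candidate edge. A sketch (assuming tags are already remapped to integers; not the commenter's actual code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Each tag occurs in at most two photos, so every tag shared by two photos
// nominates one candidate edge; pairs intersecting in 3 tags get an edge,
// yielding the graph for the Hamiltonian-path search.
vector<vector<int>> buildGraph(int nPhotos, const vector<vector<int>>& tagsOf) {
    unordered_map<int, int> firstOwner;   // tag -> first photo seen with it
    map<pair<int,int>, int> shared;       // unordered photo pair -> #common tags
    for (int p = 0; p < nPhotos; p++)
        for (int t : tagsOf[p]) {
            auto [it, fresh] = firstOwner.try_emplace(t, p);
            if (!fresh) shared[{min(it->second, p), max(it->second, p)}]++;
        }
    vector<vector<int>> adj(nPhotos);
    for (auto& [pr, cnt] : shared)
        if (cnt == 3) {                   // intersections are 0 or 3 per the above
            adj[pr.first].push_back(pr.second);
            adj[pr.second].push_back(pr.first);
        }
    return adj;
}
```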
How did you guys choose the first picture?
I am a noob; can someone share a solution link (C or C++)?
Here's our code https://github.com/MarinShalamanov/Hashcode2019
1) First I separate the horizontal and vertical pics. 2) For every nH (total number of horizontal images).
3) I do the same as above for vertical images, but taking the combination with minimum overlap.
Is it the same as what we do in, like, the selection sort method?!
Google has uploaded a copy of this contest on Kaggle, where your solution can be judged. Only question D was uploaded, however.
I have produced a solution that is very close to its theoretical maximum, with a reasonable runtime: https://www.kaggle.com/huikang/441k-in-11-mins
I have also analysed the data for question D: https://www.kaggle.com/huikang/hc-2019q-eda-and-baseline-soln
More comments are available in the notebooks themselves. Have fun!