This year's Bubble Cup first round ended recently. The marathon/challenge problem was GUESSN5 from SPOJ. What were your approaches?
Also, does anyone know how the checker was created? Trying all possible lies would be too slow.
It's easy to see that the answer is correct when, for each pair of positions $$$(x, y)$$$, there are at least $$$2 \cdot w + 1$$$ queries that contain one of the positions but not the other. Otherwise we can lie in such a way that $$$x$$$ will be indistinguishable from $$$y$$$.
The task was to build a matrix with $$$n$$$ columns and $$$q$$$ rows with elements $$$0$$$ or $$$1$$$, where the rows are your queries for a test.
You can notice that a necessary and sufficient condition for the queries to be correct is that no two columns differ in fewer than $$$2 \cdot w + 1$$$ positions; this is easy to prove.
So the checker can be written in $$$O(\frac{n^2 \cdot q}{64})$$$ with bitsets.
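In case it helps, here's a minimal sketch of such a checker (the names and the bound `MAXQ` are my assumptions, not the official checker):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXQ = 128; // assumed bound on the number of queries

// cols[i] holds column i of the q x n matrix:
// bit j is 1 iff query j contains position i.
bool isGoodMatrix(const vector<bitset<MAXQ>>& cols, int w) {
    int n = cols.size();
    for (int x = 0; x < n; x++)
        for (int y = x + 1; y < n; y++)
            // columns x and y must differ in at least 2*w + 1 positions,
            // and each pairwise comparison costs only O(q / 64)
            if ((int)(cols[x] ^ cols[y]).count() < 2 * w + 1)
                return false;
    return true;
}
```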
About our solution.
The idea was to generate matrices for all $$$n$$$ with a fixed $$$w$$$. If we have a matrix for some $$$n$$$, then we can try to get a matrix for $$$n+1$$$ just by adding a column, but sometimes luck isn't on our side and we have to add some rows as well.
The simplest idea, just trying to add a random column up to 100 times (after each attempt checking the matrix for goodness in $$$O(\frac{n \cdot q}{64})$$$) and adding random rows (with the number of 1s equal to $$$n/2$$$) when that fails, was enough to get around 55k.
Then there were some optimizations, like our own bitset (two long longs, because the answer always had fewer than 100 queries), heuristics for cleverer generation of columns, and many other things. I don't really know everything our final solution did, but I can tell for sure that 49k was an easy catch, and then my teammates literally danced with a tambourine to get each next solution, which gained only 100-500 more points))
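A rough sketch of that simplest randomized generator, reusing the column-bitset representation from the checker sketch above (all names and constants are my assumptions, and it assumes $$$q$$$ stays below `MAXQ`):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXQ = 128;
mt19937 rng(12345);

// Check a candidate column against all existing ones in O(n * q / 64).
bool fitsMatrix(const vector<bitset<MAXQ>>& cols, const bitset<MAXQ>& cand, int w) {
    for (const auto& c : cols)
        if ((int)(c ^ cand).count() < 2 * w + 1) return false;
    return true;
}

// Try to extend a good matrix from n to n+1 columns: attempt 100 random
// columns; if all fail, add a random row with n/2 ones and retry.
void addColumn(vector<bitset<MAXQ>>& cols, int& q, int w) {
    while (true) {
        for (int attempt = 0; attempt < 100; attempt++) {
            bitset<MAXQ> cand;
            for (int j = 0; j < q; j++) cand[j] = rng() & 1;
            if (fitsMatrix(cols, cand, w)) { cols.push_back(cand); return; }
        }
        // No luck: add a new row in which a random half of the columns get a 1.
        vector<int> idx(cols.size());
        iota(idx.begin(), idx.end(), 0);
        shuffle(idx.begin(), idx.end(), rng);
        for (size_t i = 0; i < idx.size() / 2; i++) cols[idx[i]][q] = 1;
        q++;
    }
}
```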
Sorry for the off-topic question.
I want to know how people solved "First to meet the spaceship", and if somebody wrote an online CHT, please tell how you managed to reach sufficient accuracy.
I solved it without CHT. We can do two sweep lines (one from the left and one from the right) and maintain a set of the vehicles that can potentially be first at a given point. We will have events for the overtaking times of pairs of cars: when one car overtakes another, the latter is no longer a potential candidate to be first. The only observation needed is that we only have to consider overtakings of adjacent vehicles (by X coordinate). Now everything can be implemented with two sets and only integer calculations.
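For the "only integer calculations" part, I assume a cross-multiplication comparison like the following (a hypothetical helper, not necessarily the exact code):

```cpp
#include <cstdint>

// Vehicle a at position xa with speed va reaches point q (with q >= xa, xb
// and positive speeds) no later than vehicle b iff
//   (q - xa) / va <= (q - xb) / vb,
// which after multiplying by va * vb > 0 becomes an exact integer test
// (assuming the products fit in 64 bits).
bool reachesNoLater(int64_t q, int64_t xa, int64_t va, int64_t xb, int64_t vb) {
    return (q - xa) * vb <= (q - xb) * va;
}
```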
What an elegant solution)
I had something with convex hulls of points:
For each vehicle located at $$$p$$$ with speed $$$v$$$, put the point $$$(p, v)$$$ on the plane. Transform each query $$$t$$$ to a point $$$(t, 0)$$$ on the OX axis.
Now, who will reach $$$t$$$ first from the left? A vehicle to the left of $$$t$$$ will reach $$$t$$$ in time $$$\frac{t-p}{v}$$$, which is conveniently the tangent of the angle between the line $$$x=t$$$ and the line joining the points $$$(p,v)$$$ and $$$(t,0)$$$. Therefore, we need to find the point $$$(p,v)$$$ for which this angle is minimized. If we have the upper convex hull of the points to the left of $$$t$$$, this query is a simple binary search on it.
We sweep all vehicles and queries from left to right, adding the points to the hull and answering the queries using the binsearch/two-pointers technique. We similarly process the points to the right of the query.
The time complexity is linear after sorting.
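A minimal sketch of the left-to-right pass under my assumptions (made-up data, names are mine, and I assume the arrival time is unimodal along the upper hull, so the tangent point can be found by binary search):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct Pt { ll p, v; };  // vehicle at position p with speed v -> point (p, v)

ll cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.p - a.p) * (c.v - a.v) - (b.v - a.v) * (c.p - a.p);
}

// true iff vehicle a reaches t no later than vehicle b:
// (t - a.p)/a.v <= (t - b.p)/b.v, compared exactly by cross-multiplication
bool noLater(const Pt& a, const Pt& b, ll t) {
    return (t - a.p) * b.v <= (t - b.p) * a.v;
}

int main() {
    // example data: vehicles sorted by position, queries sorted by t
    vector<Pt> cars = {{0, 3}, {2, 5}, {6, 1}};
    vector<ll> queries = {4, 7, 10};

    vector<Pt> hull;  // upper hull of the points added so far
    size_t nxt = 0;
    for (ll t : queries) {
        // add all vehicles with p <= t, maintaining the upper hull as a stack
        while (nxt < cars.size() && cars[nxt].p <= t) {
            while (hull.size() >= 2 &&
                   cross(hull[hull.size() - 2], hull.back(), cars[nxt]) >= 0)
                hull.pop_back();
            hull.push_back(cars[nxt++]);
        }
        if (hull.empty()) continue;
        // binary search for the first hull vertex whose time stops decreasing
        int lo = 0, hi = (int)hull.size() - 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (noLater(hull[mid], hull[mid + 1], t)) hi = mid;
            else lo = mid + 1;
        }
        printf("query %lld: first from the left arrives in (%lld - %lld)/%lld\n",
               t, t, hull[lo].p, hull[lo].v);
    }
}
```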
Cool!
I can describe my approach, but it was enough to get only about 41700 points, so I'd love to hear about mnbvmar's solution.
It's trivial to see that each pair of vectors must differ in at least $$$2 \cdot w + 1$$$ positions. I'll construct solutions for $$$n = 2^p$$$ for some $$$p$$$. If $$$n$$$ isn't a power of two, I'll create a solution for a greater $$$n$$$ and just cut out some vectors. I guess this was my weakness: I had to use as many dimensions (bits) as for the smallest $$$2^p$$$ greater than or equal to the $$$n$$$ given in the input.
So it turns out that it's a good idea to pick $$$p$$$ vectors and print all $$$2^p$$$ xors of their subsets. The condition is that no two subsets can have xors that differ in fewer than $$$2 \cdot w + 1$$$ positions. We can observe that if such a situation happens, then the empty subset also differs from some other subset in fewer than $$$2 \cdot w + 1$$$ positions (because if subsets $$$a$$$ and $$$b$$$ differ in the wrong way, then the empty subset and $$$a \oplus b$$$ differ in the wrong way as well).
So I'm trying to add these $$$p$$$ vectors one by one, and each time I can decide in $$$O(2^p)$$$ time whether the new set of vectors is good. I'm trying this many times and just compressing the data to fit into the code (it's about $$$O(w_{max} \cdot \log^2(n_{max}))$$$ bits of data).
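A minimal sketch of that $$$O(2^p)$$$ check, assuming the vectors fit into a single 64-bit word (the real $$$q$$$ can approach 100, so two words would be needed, as mentioned above); all names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// 'xors' holds the xors of all subsets of the base vectors chosen so far
// (initially just {0} for the empty subset). By the observation above, the
// set stays good iff every subset xor involving the candidate is heavy.
bool tryAddBase(vector<uint64_t>& xors, uint64_t cand, int w) {
    for (uint64_t s : xors)
        if (__builtin_popcountll(s ^ cand) < 2 * w + 1)
            return false;  // some pair of codewords would be too close
    // accepted: double the subset-xor list
    size_t old = xors.size();
    for (size_t i = 0; i < old; i++) xors.push_back(xors[i] ^ cand);
    return true;
}

// usage sketch:
//   vector<uint64_t> xors = {0};
//   repeatedly call tryAddBase(xors, someRandomCandidate, w)
//   until p base vectors have been accepted
```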
The last optimization was to try adding only vectors with exactly $$$2 \cdot w + 1$$$ ones. Why? It's logical that we want the resulting $$$2^p$$$ spheres (yes, we're dealing with spheres in a multidimensional space) to be as close to each other as possible. So we can assume that some two spheres will be at a distance of exactly $$$2 \cdot w + 1$$$, and we can assume that it's the sphere we are currently trying to add (one of the $$$p$$$ "base" spheres) and the sphere centered at the all-zero vector.