Another (IMO very nice) construction for Villagers — Maximum: Do dfs on the tree and sort the vertices by dfs preorder. Let $$$ord[i]$$$ be the $$$i$$$-th vertex we visit in the dfs. Then move villager $$$ord[i]$$$ to house $$$ord[(i + \frac{n}{2}) \bmod n]$$$. It's not hard to prove that this is maximal.
Hmm, I thought this only works from the centroid, why does the above work exactly?
Essentially, if you imagine the centroid and the subtrees surrounding it, vertices $$$i$$$ and $$$i + \frac{n}{2}$$$ can never be in the same subtree (as each subtree has at most $$$\frac{n}{2}$$$ elements). So the end result is the same as using the centroid explicitly.

If you look at any edge, all the vertices which are on the same side of this edge form a contiguous subsegment of the $$$ord$$$ array (imagine this array as "cyclic"). Look at the smaller subsegment; say it has size $$$s \leq \frac{n}{2}$$$. Then when we "shift" the entries, consider the leftmost and the rightmost element of our subsegment. Since we shift by $$$\frac{n}{2} \geq s$$$, the leftmost element will not be in the original subsegment anymore. The rightmost element, on the other hand, is shifted by $$$\frac{n}{2} \leq n - s$$$, so we don't move it far enough to end up in the original subsegment again. All other elements of the subsegment end up between the leftmost and the rightmost, so they can't end up in the original subsegment either. Thus all the villagers from the side we have chosen in the beginning cross this edge.
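In case it helps, here is a minimal sketch of this shift-by-$$$\frac{n}{2}$$$ construction. The I/O format, 1-indexed vertices, and taking the shift as $$$\lfloor n/2 \rfloor$$$ for odd $$$n$$$ are my own assumptions, not part of the comment above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the "shift the dfs preorder by n/2" construction described above.
// perm[v] is the house villager v moves to.
int main() {
    int n;
    cin >> n;
    vector<vector<int>> g(n + 1);
    for (int i = 0; i < n - 1; i++) {
        int a, b;
        cin >> a >> b;
        g[a].push_back(b);
        g[b].push_back(a);
    }

    // Iterative dfs from vertex 1 to collect the preorder ord[0..n-1].
    vector<int> ord;
    vector<bool> seen(n + 1, false);
    stack<int> st;
    st.push(1);
    while (!st.empty()) {
        int u = st.top(); st.pop();
        if (seen[u]) continue;
        seen[u] = true;
        ord.push_back(u);
        for (int v : g[u]) if (!seen[v]) st.push(v);
    }

    // Villager ord[i] moves to house ord[(i + n/2) % n]  (floor division for odd n).
    vector<int> perm(n + 1);
    for (int i = 0; i < n; i++) perm[ord[i]] = ord[(i + n / 2) % n];

    for (int v = 1; v <= n; v++) cout << perm[v] << (v == n ? '\n' : ' ');
    return 0;
}
```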
Can someone please share their implementation for problem A?
Here is my AC solution
Submissions are hidden for this contest. Could you share the code via ideone.com? Thanks in advance :)
I pasted it on Pastebin.
thank you
Note that the idea is the same, but I used Ternary Search to find the optimal $$$x$$$ for $$$\sum_i |a_i \cdot x + b_i|$$$.
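A minimal sketch of such a ternary search follows; the search range, iteration count, and toy coefficients are illustrative assumptions. It relies only on $$$\sum_i |a_i x + b_i|$$$ being convex in $$$x$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// f(x) = sum |a_i*x + b_i| is convex in x, so ternary search converges to its minimum.
double f(double x, const vector<double>& a, const vector<double>& b) {
    double s = 0;
    for (size_t i = 0; i < a.size(); i++) s += fabs(a[i] * x + b[i]);
    return s;
}

int main() {
    vector<double> a = {1, -1, 1}, b = {2, 3, -5};   // toy example
    double lo = -1e9, hi = 1e9;                      // assumed search range for x
    for (int it = 0; it < 300; it++) {               // enough iterations for good precision
        double m1 = lo + (hi - lo) / 3;
        double m2 = hi - (hi - lo) / 3;
        if (f(m1, a, b) < f(m2, a, b)) hi = m2;      // minimum lies in [lo, m2]
        else                           lo = m1;      // minimum lies in [m1, hi]
    }
    double x = (lo + hi) / 2;
    printf("x = %.6f  f(x) = %.6f\n", x, f(x, a, b));
    return 0;
}
```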
I have an alternative solution to problem B1.
First, note that we can choose a subset $$$S$$$ of $$$x$$$ nodes, with $$$2 \leq x \leq N$$$, and move everyone in this set to the position of another member of the set so that nobody stays in their own position. In the optimal strategy each edge of the smallest subtree that contains all nodes of $$$S$$$ is counted twice, so the optimal strategy is always to sort $$$S$$$ by dfs order and match $$$S[i]$$$ with $$$S[(i+1) \bmod |S|]$$$ for each $$$i$$$ from $$$0$$$ to $$$|S|-1$$$ (shift right by 1).

Then we can reduce our problem to cutting the tree into subsets of nodes and adding their consecutive distances. It is always optimal to select subsets which form a connected subtree, otherwise we count some edge more than twice. It is also optimal to select as many subsets as possible, because with $$$k$$$ of them we use only $$$n - k$$$ edges of the tree, and each of those edges is used twice.

Then we can run the following greedy algorithm: choose one node and color it black, its neighbours white, and so on, like a bipartite coloring. Then we run a dfs from any node (node 1 for simplicity), and whenever we find an edge with both endpoints still unused, we create a new subset with these two nodes and mark them as used. Finally, we run a multi-source bfs and add the unmarked nodes to some adjacent subset. This algorithm is optimal because the matching found by the dfs is a maximum matching on the tree, and since no subset may consist of a single node (the smallest valid subset size is 2), the best possible number of subsets equals the size of a maximum matching.
You can see my code here for a better understanding
// 2-coloring (bipartition) of the tree
void dfs_color(int u, int c) { color[u] = c; for (auto v : g[u]) if (!color[v]) dfs_color(v, c % 2 + 1); }

// Greedy matching: after processing the children of u, pair u with a still-unmatched child
void dfs(int u, int p) {
    for (auto v : g[u]) if (v != p) dfs(v, u);
    for (auto v : g[u])
        if (v != p && color[u] != color[v] && res[u] == 0 && res[v] == 0) {
            res[u] = res[v] = ++cnt;   // open a new subset {u, v}
            q.push(u); q.push(v);
        }
}

// Multi-source bfs: attach every unmatched node to an adjacent subset
void bfs() {
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (auto v : g[u]) if (!res[v]) { res[v] = res[u]; q.push(v); }
    }
}

// inside main: cin >> n; ...
Can anyone please explain why in the editorial of problem A we have $$$a_u \cdot a_v = -1$$$?
There's a small error; it's meant to be $$$a_u a_{u'} = -1$$$.
Which is another way of stating that $$$a_u$$$ and $$$a_{u'}$$$ have opposite signs, i.e. either $$$a_u = 1,\ a_{u'} = -1$$$ or $$$a_u = -1,\ a_{u'} = 1$$$ (which is the only possible case left in the context there).
I wonder why my solution to problem C got accepted. When I had almost finished implementing my first idea, I realized that I hadn't considered the case where the mutation table of a gene may contain the gene itself, and I couldn't find a good order for the DP. Then I thought that this case may not occur too many times, so I just iterated 10 times to improve the answer, and surprisingly I got AC. I can't even prove it!
Does anyone have a similar solution or an idea about this? This is my AC solution
I can easily come up with a test that needs to use a single cycle-transition. It's only beneficial to essentially change the prefix and suffix to avoid antibodies with the same gene. For example,
I think that if you want to come up with cases where you need to use a single cycle-transition more than 10 times, you'd have to introduce antibodies that would prevent you from using non-cycle transitions. So I think you'll quickly run into the limit of 50 on the total length of the antibodies.
In problem A, why is this submission getting wrong answer on test 6?
The sum of the values of the vertices is the same in the jury's answer and in mine.
You must minimize the sum of absolute values.
oh my bad, thank you :)
I solved problem A in a different manner, and sometimes it doesn't even consider the median of the $$$b$$$'s. I wish someone could show me the proof that the median works (if you know it, please reply to my comment, I'd be very grateful!). I find my way more intuitive (of course, because I thought of it :P), and I'll describe it.
So we ended up with lots of $$$f(x) = ax+b$$$ functions, where $$$a \neq 0$$$ (if $$$a = 0$$$, throw it away, for it's a constant). This way, there are only two cases:
if $$$a < 0$$$, then for every $$$x > -b/a$$$, $$$f(x) < 0$$$, and for every $$$x < -b/a$$$, $$$f(x) > 0$$$.
if $$$a > 0$$$, then for every $$$x > -b/a$$$, $$$f(x) > 0$$$, and for every $$$x < -b/a$$$, $$$f(x) < 0$$$.
So I put every pair ($$$-b_i/a_i$$$, $$$i$$$) into a vector and sorted it. My observation was that, between two consecutive $$$x$$$'s in the vector, the sum of the absolute values of the functions is itself a linear function. So the best answer is attained at one of the endpoints of this interval. And the only function that stops behaving the way it did when we reach a new $$$x$$$ is the function responsible for this $$$x$$$. For example, suppose we are processing $$$a_ix+b_i$$$ and $$$a_i > 0$$$. Before we process it, the actual contribution this function makes to the total is its reflection across the x-axis, i.e. $$$-a_ix-b_i$$$. At the moment we process its $$$x$$$, it contributes 0, of course. After that, the contribution changes to $$$a_ix+b_i$$$. That's it.
Maybe I unconsciously did that median thing, but it isn't quite clear to me. Could someone please provide a proof?
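For what it's worth, here is a minimal sketch of the breakpoint sweep described above on a toy input (the names and example coefficients are made up, and terms with $$$a_i = 0$$$ are assumed to have been discarded beforehand, as in the comment). It keeps the coefficients $$$(A, B)$$$ of the current linear piece $$$Ax + B$$$ and evaluates the total at every breakpoint $$$-b_i/a_i$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// f(x) = sum |a_i*x + b_i| is piecewise linear with breakpoints at x_i = -b_i/a_i,
// so its minimum is attained at one of the breakpoints. We maintain the current
// linear piece A*x + B and flip one term's sign each time we pass a breakpoint.
int main() {
    vector<double> a = {1, -1, 2}, b = {3, 4, -1};   // toy example, all a_i != 0
    int n = a.size();

    vector<pair<double, int>> bp;                    // (breakpoint, index of its term)
    double A = 0, B = 0;
    for (int i = 0; i < n; i++) {
        bp.push_back({-b[i] / a[i], i});
        // To the left of every breakpoint: |a_i x + b_i| = -(a_i x + b_i) if a_i > 0,
        // and = (a_i x + b_i) if a_i < 0.
        if (a[i] > 0) { A -= a[i]; B -= b[i]; }
        else          { A += a[i]; B += b[i]; }
    }
    sort(bp.begin(), bp.end());

    double best = 1e18, bestX = 0;
    for (auto [x, i] : bp) {
        // Crossing breakpoint i flips the sign of its term.
        if (a[i] > 0) { A += 2 * a[i]; B += 2 * b[i]; }
        else          { A -= 2 * a[i]; B -= 2 * b[i]; }
        double val = A * x + B;     // f is continuous, so either adjacent piece works at x
        if (val < best) { best = val; bestX = x; }
    }
    printf("min f(x) = %.6f at x = %.6f\n", best, bestX);
    return 0;
}
```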