Can anyone explain the centroid decomposition solution?
Let's fix the current centroid as the root.
If a path v - u of length k exists and it passes through root, we can split it into two parts: from v to the root and from the root to u. Note: v and u must lie in different subtrees of root.
Next, we process every subtree of root from left to right (or in another fixed order) with a simple dfs(vertex, distance, edgeCount), collecting a map<distance, edgeCount> for each subtree (local) and one global map. Suppose we are at vertex u with distance D and edge count E: we check whether the global map contains the key k - D, update the answer if it does (answer = min(answer, E + global[k - D])), and update the minimum in the local map. After the subtree is fully processed, we merge the local map into the global one. Merging only after the whole subtree is done is what guarantees that the two halves of a counted path come from different subtrees. A sketch is below.
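Here is a minimal C++ sketch of this approach for the IOI 2011 "Race" problem (find a path of total weight exactly k with the minimum number of edges). The identifiers and the 0-indexed input format are my assumptions, and positive edge weights are assumed so branches with distance above k can be pruned:

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, answer;
long long k;
vector<vector<pair<int, long long>>> adj; // adj[v] = {neighbor, edge weight}
vector<int> sz;
vector<bool> removed;                      // removed[v]: v was a centroid already
map<long long, int> globalMap;             // distance -> min edges, processed subtrees

int calcSize(int v, int p) {
    sz[v] = 1;
    for (auto [to, w] : adj[v])
        if (to != p && !removed[to]) sz[v] += calcSize(to, v);
    return sz[v];
}

int findCentroid(int v, int p, int treeSize) {
    for (auto [to, w] : adj[v])
        if (to != p && !removed[to] && sz[to] * 2 > treeSize)
            return findCentroid(to, v, treeSize);
    return v;
}

// Collect (distance, edgeCount) of one subtree, querying the global map on the way.
void dfs(int v, int p, long long dist, int edges,
         vector<pair<long long, int>>& local) {
    if (dist > k) return;        // prune: weights are positive, going deeper can't help
    auto it = globalMap.find(k - dist);
    if (it != globalMap.end()) answer = min(answer, edges + it->second);
    local.push_back({dist, edges});
    for (auto [to, w] : adj[v])
        if (to != p && !removed[to]) dfs(to, v, dist + w, edges + 1, local);
}

void solve(int root) {
    int c = findCentroid(root, -1, calcSize(root, -1));
    removed[c] = true;
    globalMap.clear();
    globalMap[0] = 0;            // a path half may end at the centroid itself
    for (auto [to, w] : adj[c]) {
        if (removed[to]) continue;
        vector<pair<long long, int>> local;
        dfs(to, c, w, 1, local);
        for (auto [d, e] : local) {            // merge local map into the global one
            auto it = globalMap.find(d);
            if (it == globalMap.end() || it->second > e) globalMap[d] = e;
        }
    }
    for (auto [to, w] : adj[c])
        if (!removed[to]) solve(to);           // recurse into the remaining pieces
}

int main() {
    cin >> n >> k;
    adj.assign(n, {});
    sz.assign(n, 0);
    removed.assign(n, false);
    answer = INT_MAX;
    for (int i = 0; i < n - 1; i++) {
        int a, b; long long w;
        cin >> a >> b >> w;                    // 0-indexed endpoints assumed
        adj[a].push_back({b, w});
        adj[b].push_back({a, w});
    }
    solve(0);
    cout << (answer == INT_MAX ? -1 : answer) << "\n";
}
```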
Thanks. This works because for the first centroid we have to process all N vertices in the DFS, and after removing the centroid each remaining piece has at most half as many vertices, so every vertex is processed at O(log N) levels of the decomposition. Therefore the complexity is O(N log N), excluding the map's log factor. Right?
I think it's enough to use an array of size k instead of a map, if we are careful about reusing it. This way we get O(N log N) without any hashing (a hash map being what one would otherwise use to remove the log factor of the map). This is the official solution and it can be found here. The key point of the reuse is sketched below.
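To illustrate the "careful reuse": a minimal sketch, assuming positive weights so every useful distance lies in [0, k]; the DistTable name and interface are mine, not from the official code. The array is allocated once, and after each centroid only the cells that were actually written are reset, so a reset costs O(number of writes) rather than O(k):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative replacement for the global map (names are mine, not official).
// best[d] = min edge count for distance d among already-processed subtrees.
struct DistTable {
    vector<int> best;           // sized k+1 once, reused across all centroids
    vector<long long> touched;  // distances written since the last reset

    explicit DistTable(long long k) : best(k + 1, INT_MAX) {}

    void update(long long d, int edges) {
        if (best[d] == INT_MAX) touched.push_back(d);
        best[d] = min(best[d], edges);
    }
    int query(long long d) const { return best[d]; }

    void reset() {              // O(#writes), not O(k): the whole point
        for (long long d : touched) best[d] = INT_MAX;
        touched.clear();
    }
};

int main() {
    DistTable t(10);                             // tiny usage example
    t.update(0, 0);
    t.update(4, 2);
    cout << t.query(4) << "\n";                  // prints 2
    t.reset();                                   // ready for the next centroid
    cout << (t.query(4) == INT_MAX) << "\n";     // prints 1: cell was cleared
}
```

With something like this in place, the global map in the sketch above can be replaced by one DistTable shared across all centroids, calling reset() after each centroid is processed.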
Where can we submit the problem?
http://wcipeg.com/problem/ioi1112