I was trying to solve this problem from the gym and struggled to find the solution. I finally came up with a very interesting data structure capable of handling any subtree update, and I really don't know whether someone else has seen it before, but I will post it here since I could not find its name (if it has one) on Google. My solution solves the problem with O(N) memory, O(N) construction, and O(sqrt(N)) per query. When I read the editorial I saw it had another interesting approach, with an update buffer, which solves the problem in O(Q*sqrt(Q)*log(N)). To my surprise my solution is O(Q*sqrt(N) + N) counting both construction and queries, and conveniently Q was 10^4, so the author probably didn't know about my approach.
After analyzing my approach and reading one of the comments on the same problem, I saw that the problem can also be solved using square root decomposition over the Euler Tour Technique. Comparing the two, they seem about 90% the same; the difference is in how the decomposition is stored. Just as one can see the relationship between KMP and Aho-Corasick (matching in an array vs. matching in a tree), I see this as (sqrt decomposition in an array vs. sqrt decomposition in a tree). So, after writing this structure down, I looked for cases where my data structure beats the Euler-tour trick, and I thought of a new feature that can be implemented: the tree can change dynamically (be cut and joined). Well, I have probably bored you with this story, so let's get to the point.
Fragmented Tree
Construction
The idea is not very hard. First do a DFS to compute the subtree size and depth of each node. Order all nodes by depth in reverse order; this can be done in O(N) using counting sort. We will now create what I call fragments: we collect small tree-like portions of the tree and assign them to fragments. The subtree size is adjusted along the way so that it counts how many nodes of the subtree are not yet assigned to a fragment. So, each time we process a node we first update its subtree size, then iterate over its children, accumulating how many of their nodes are still unassigned; whenever this count reaches sqrt(N), we create a special node, attach those children to this new node, and remove them from the original parent.
for(int u : orderedNodes){
    SZ[u] = 1;
    for(int v : adj[u]){
        SZ[u] += SZ[v];   // SZ[v] only counts v's not-yet-fragmented nodes
    }
    temp = []             // children collected for the next fragment
    spNodes = []          // special nodes created under u
    int sum = 0;
    for(int v : adj[u]){
        temp.push_back(v);
        sum += SZ[v];
        if( sum >= sqrt(N) ){
            // group the collected children into a new fragment
            int sp = createSpecialNode(u, temp);
            spNodes.push_back(sp);
            temp = [];
            SZ[u] -= sum; // those nodes now live inside a fragment
            sum = 0;
        }
    }
    adj[u] = concat(temp, spNodes);
}
After this we end up with O(sqrt(N)) fragments.
Here each special node needs to be able to answer any query or update over its fragment in O(1). Since a fragment is very small (about sqrt(N) nodes), even if O(1) is not possible, any other structure used inside the fragment will still be relatively efficient, costing sqrt(sqrt(N)) or log(sqrt(N)); the time constraint will just be very tight.
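As a concrete illustration, a fragment answering subtree-sum queries can keep its running total next to the node values, which makes both a point update and the whole-fragment query O(1). This is only a minimal sketch with hypothetical names (`pointUpdate`, `queryFragment`), not the exact structure from my submission:

```cpp
#include <vector>
typedef long long Long;

// Minimal fragment sketch: O(1) point update and O(1) whole-fragment query.
struct Fragment {
    std::vector<Long> val; // values of the <= ~sqrt(N) nodes in this fragment
    Long total = 0;        // sum of all node values, maintained incrementally

    explicit Fragment(int size) : val(size, 0) {}

    void pointUpdate(int i, Long delta) { // O(1)
        val[i] += delta;
        total  += delta;
    }
    Long queryFragment() const { return total; } // O(1)
};
```

Any fragment structure with this interface plugs into the DFS below: the query side only needs `queryFragment()` to be cheap.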
So, after this decomposition, we create a new adjacency list (*adjacencyList*) containing these new nodes in O(N), and there will be O(sqrt(N)) extra nodes. After that, another adjacency list (*subAdjacencyList*) is created in O(sqrt(N)) for the tree consisting only of the special nodes.
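Building *subAdjacencyList* can be sketched as a single pass over the fragments. The names here are assumptions about the bookkeeping (`fragmentOf` maps each node to its fragment id, `leaderParent` gives the parent node of each fragment's leader, -1 for the root fragment); my submission stores this differently, but the idea is the same:

```cpp
#include <vector>

// Hypothetical sketch: link each fragment to the fragment that contains
// the parent of its leader. One O(sqrt(N)) pass over the fragments.
std::vector<std::vector<int>> buildSubAdjacency(
        const std::vector<int>& leaderParent, // per fragment: parent node of its leader, -1 for root
        const std::vector<int>& fragmentOf) { // per node: fragment id
    int fragCount = (int)leaderParent.size();
    std::vector<std::vector<int>> subAdj(fragCount);
    for (int f = 0; f < fragCount; f++)
        if (leaderParent[f] != -1)
            subAdj[fragmentOf[leaderParent[f]]].push_back(f); // parent fragment -> f
    return subAdj;
}
```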
Query
After the construction, each subtree query can be answered with a specialized DFS. The DFS starts on u and does one of two things. If the node is special, it queries the fragment in O(1) and recursively adds the answers returned by dfs(specialChild) using the subAdjacencyList. If the node is non-special, it applies any pending update in the fragment this node belongs to and queries each individual node recursively using the adjacencyList. It is not hard to prove that the fragment u belongs to will be updated completely in sqrt(N) steps (as in lazy propagation); then the DFS visits at most sqrt(N) nodes in that same fragment, and then it queries at most all the fragments. This gives three terms of sqrt(N), leaving a complexity of O(sqrt(N)). The code is something like this:
// G[u]      : fragment u belongs to
// leader(u) : returns the leader of u's fragment
Long getSum(int u){
    if( leader(u) == u ){
        // u leads its fragment: take the O(1) fragment answer
        // and recurse into the child fragments only
        int fragId = fragmentOf[u];
        Long w = frags[ fragId ].queryFragment();
        for(int g : subAdjacencyList[ fragId ]){
            w += getSum( frags[g].leader );
        }
        return w;
    }else{
        // u is inside a fragment: flush pending lazy updates,
        // then walk the individual nodes
        frags[ G[u] ].updateAllNodesIfPendingChanges();
        Long w = u >= N ? 0 : V[ L[u] ]; // special nodes (id >= N) carry no value
        for(int v : adjacencyList[u]){
            w += getSum(v);
        }
        return w;
    }
}
Update
If the update is on one subtree, it can be done similarly to the query function, with lazy updates in the fragments and direct updates on sqrt(N) individual nodes. If the update touches all nodes at distance L from the root (as in this problem), it is slightly different. In that case we only need to iterate over each fragment and update one value per level stored in that fragment (at most sqrt(N) levels), like valueInLevel[L - leader.level] += v, and also update the value representing the whole fragment, in this case fragmentSum += v * nodesInLevel[L - leader.level]. This is O(1) per fragment. The latter update looks like this:
void update(int LVL, Long v){
    V[LVL] += v;              // global per-level value
    for(Fragment &f : frags){ // O(1) per fragment
        f.update(LVL, v);
    }
}
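The per-fragment side of this update could look like the sketch below. The member names (`valueInLevel`, `nodesInLevel`, `leaderLevel`) follow the description above but are an assumed layout, not the exact submission code:

```cpp
#include <vector>
typedef long long Long;

// Sketch of the per-fragment part of a level update: everything is O(1).
struct Fragment {
    std::vector<Long> valueInLevel; // pending value added to each local level
    std::vector<int>  nodesInLevel; // number of fragment nodes on each local level
    int  leaderLevel = 0;           // depth of the fragment's leader in the tree
    Long fragmentSum = 0;           // total sum of the fragment

    void update(int LVL, Long v) {
        int l = LVL - leaderLevel;                          // level relative to leader
        if (l < 0 || l >= (int)nodesInLevel.size()) return; // level not in fragment
        valueInLevel[l] += v;                // lazy: applied per node only when needed
        fragmentSum += v * nodesInLevel[l];  // keep the O(1) fragment answer correct
    }
};
```

Note that `fragmentSum` must grow by `v` times the number of nodes on that level, which is why `nodesInLevel` has to be precomputed during construction.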
Cut/Join
Everything above is sqrt(N) per query, and cut/join at a node is not that hard to do either. For a cut we only split one fragment and add a new parentless special node; for a join we assign a parent to that special node. This sounds like O(1), but since we need to update the old (in cut) or new (in join) parent of the cut node, it will probably cost sqrt(N). This sounds amazing, but there is one problem: each cut creates one special node, and the complexity of the whole structure is O(specialNodeCount) per query and update, so with many cuts we would lose the efficiency. To solve this we take advantage of the O(N) creation time: each time the whole tree has been cut sqrt(N) times, we rebuild the tree in O(N), so that it recovers the optimal number of special nodes again.
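The amortised rebuild rule can be sketched as a simple counter around the cut operation. Everything here (`FragmentedTree`, `cut`, `rebuild`, the exposed `rebuildCount`) is illustrative scaffolding, assuming `rebuild()` reruns the O(N) construction described above:

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the amortised rebuild rule: after sqrt(N) cuts, pay O(N) once
// so the number of special nodes returns to O(sqrt(N)).
struct FragmentedTree {
    int n;
    int rebuildThreshold;
    int cutsSinceRebuild = 0;
    int rebuildCount = 0; // exposed here only for illustration

    explicit FragmentedTree(int n_)
        : n(n_), rebuildThreshold(std::max(1, (int)std::sqrt((double)n_))) {}

    void rebuild() {      // placeholder for the O(N) construction
        cutsSinceRebuild = 0;
        ++rebuildCount;
    }

    void cut() {          // placeholder: split one fragment, creating
                          // one new special node
        if (++cutsSinceRebuild >= rebuildThreshold)
            rebuild();    // happens at most once every sqrt(N) cuts
    }
};
```

Each rebuild costs O(N) but is triggered only every sqrt(N) cuts, so its amortised cost per cut is O(N / sqrt(N)) = O(sqrt(N)), matching the rest of the operations.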
Notes: the whole code can be seen in my submission to the problem. If you have any tip, or something in this post is wrong, just leave a comment. And if someone has seen this before, please give me the name and some sources so I can read a little about it; for now I am calling it Toribio's decomposition.