Throughout the last several weeks I have tried to invent (and solve) a generalization of segment trees (although by now it doesn't quite fit to call it "segment trees"... see for yourself). The results are very few (and not good, if you ask me), but the problem was very appealing and fun to think about.
The purpose of this blogpost is to share my experience with this problem, in order to plant curiosity in at least a few people, and also to let bright-minded users tackle this problem together and hopefully achieve better results. Since I want to share my experience, I will also put some possibly challenging parts under spoilers, in case you want to attempt them yourself.
So, there is no tl;dr. Sorry :)
The Motivation and The Problem
As the title suggests, the motivation was from inspection of segment trees. We can define the following problem:
Given $$$n$$$ elements, initially $$$0$$$, and $$$m$$$ subsets $$$\forall i \in [m], S_i \subseteq [n]$$$ (where $$$[n] = \lbrace 0, \ldots, n-1 \rbrace$$$), we need to support the following:
- Update $$$(i, x)$$$: add $$$x$$$ to $$$a_i$$$.
- Query $$$v$$$: return $$$\sum_{i \in S_v} a_i$$$.
Segment trees define the $$$m = \binom{n+1}{2}$$$ subsets implicitly — as ranges. For some "magical" reason, when we have ranges, we can define $$$n' = O(n)$$$ elements, and subsets $$$S_i, T_i \subseteq [n']$$$, such that an update $$$(v, x)$$$ requires adding $$$x$$$ to all $$$a_j, j \in S_v$$$, and a query $$$v$$$ is equal to $$$\sum_{j \in T_v} a_j$$$ — but with the bonus that $$$|S_i|, |T_i|$$$ are all $$$O(\log n)$$$, so all operations can be done in $$$O(\log n)$$$.
The construction just sets $$$n'$$$ to the number of nodes in the segment tree, sets $$$S_i$$$ for every element to the set of $$$O(\log n)$$$ nodes whose range contains that element, and sets $$$T_i$$$ for every range to the set of $$$O(\log n)$$$ canonical nodes that the range decomposes into.
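To make this concrete, here is a minimal sketch of a classic iterative segment tree (the struct name and interface are my own): the update loop walks over exactly the $$$O(\log n)$$$ nodes containing the element (the set $$$S_i$$$), and the query loop reads the $$$O(\log n)$$$ canonical nodes the range decomposes into (the set $$$T_i$$$).

```cpp
#include <bits/stdc++.h>
using namespace std;

// Point update / range sum: an update writes to the O(log n) nodes containing
// the element (S_i), a query reads the O(log n) canonical nodes of the range (T_i).
struct SegTree {
    int n;
    vector<long long> t;
    SegTree(int n) : n(n), t(2 * n, 0) {}
    void update(int i, long long x) {            // add x to a_i
        for (i += n; i > 0; i >>= 1) t[i] += x;  // i's leaf and all of its ancestors
    }
    long long query(int l, int r) {              // sum of a_l .. a_{r-1}
        long long res = 0;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res += t[l++];
            if (r & 1) res += t[--r];
        }
        return res;
    }
};

int main() {
    SegTree st(8);
    st.update(3, 5);
    st.update(6, 2);
    cout << st.query(2, 7) << "\n";              // prints 7
}
```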
I wanted to figure out whether there is a generalization that allows such optimizations for problems where we don't have ranges. Analyzing cases where the subsets are too many and implicit was too difficult for me. So the problem is as above, where the subsets $$$S_i$$$ are all given explicitly, and then queries and updates are made. So the input size is $$$O(n + \sum |S_i|)$$$.
Note: Segment trees also support different associative operations, such as minimum instead of sum. It was convenient to think about sums, and a lot of the following can be generalized to other operations, with perhaps an additional log factor.
Interpreting as a Graph
We can define a similar problem, and see that it's equivalent (up to a constant): Given an undirected graph, each node has a value $$$a_i$$$, initially $$$0$$$. An update requires us to add to a node's value, and a query $$$v$$$ asks for the sum of values of all neighbors of $$$v$$$.
The reduction from this problem to the aforementioned problem is simple: just define every subset as the set of neighbors of a node. In the opposite direction, given subsets we can define a bipartite graph with $$$n + m$$$ vertices, where each of the $$$m$$$ vertices on one side is connected to the vertices on the other side that correspond to the elements of its subset.
I found that, by interpreting the problem as a graph, I was able to make progress.
Solving for a tree
Suppose the graph is a tree. Can we solve this problem efficiently? You can attempt this yourself before reading.
Found complexity: $$$O(1)$$$ per update and query.
Solution: Let's root the tree. We maintain the value $$$a_i$$$ of every node as usual. Let's also define $$$b_i$$$ for every node, initially $$$0$$$, which will denote the sum of the values of $$$i$$$'s children.
Then, on an update $$$(v, x)$$$ we add $$$x$$$ to $$$a_v$$$ and to $$$b_{p_v}$$$ (where $$$p_v$$$ is $$$v$$$'s parent), and on a query $$$v$$$ we return $$$b_v + a_{p_v}$$$. Everything is $$$O(1)$$$. The similarity to the segment tree optimization is that we transformed updating $$$1$$$ element and querying $$$O(n)$$$ elements into updating $$$2$$$ elements and querying $$$2$$$ elements.
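Here is a minimal sketch of this, assuming the tree is given by a parent array with $$$-1$$$ for the root (the names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(1) neighbor sums on a rooted tree: a[v] is v's value, b[v] is the sum of
// the values of v's children, p[v] is v's parent (-1 for the root).
struct TreeNeighborSum {
    vector<long long> a, b;
    vector<int> p;
    TreeNeighborSum(const vector<int>& parent)
        : a(parent.size(), 0), b(parent.size(), 0), p(parent) {}
    void update(int v, long long x) {   // add x to a_v
        a[v] += x;
        if (p[v] != -1) b[p[v]] += x;
    }
    long long query(int v) {            // sum of the values of v's neighbors
        long long res = b[v];
        if (p[v] != -1) res += a[p[v]];
        return res;
    }
};

int main() {
    // A path 0 - 1 - 2 rooted at 0: the parent of 1 is 0, the parent of 2 is 1.
    TreeNeighborSum t({-1, 0, 1});
    t.update(0, 3);
    t.update(2, 4);
    cout << t.query(1) << "\n";         // neighbors of 1 are {0, 2}: prints 7
}
```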
That was awesome, let's try a general graph
Found complexity: $$$O(\sqrt{E})$$$ per update and query. If you ask me, this is cool, but not very good. Generally this can get up to $$$O(V)$$$.
Solution: This follows the general vibe of square root decomposition. Let's fix some threshold $$$K$$$, and call a vertex "heavy" if it has more than $$$K$$$ neighbors, and "light" otherwise. We will maintain the values $$$a_i$$$ as we go, and also maintain $$$b_i$$$ for each heavy vertex $$$i$$$, which will be the answer whenever we query on it.
For every vertex, maintain a list of its heavy neighbors, and observe that it has size at most $$$\frac{2E}{K}$$$, since the sum of degrees is $$$2E$$$ and hence there are at most $$$\frac{2E}{K}$$$ heavy vertices in total. On an update $$$(v, x)$$$, add $$$x$$$ to $$$a_v$$$, and to $$$b_u$$$ for every heavy neighbor $$$u$$$ of $$$v$$$. On a query, if it is on a light vertex, iterate over all the neighbors and sum $$$a_u$$$, and if it is on a heavy vertex, return $$$b_v$$$.
Updates take at most $$$O(\frac{E}{K})$$$, and queries take at most $$$O(K)$$$. If we want to minimize their maximum, then by choosing $$$K = \sqrt{E}$$$ we obtain $$$O(\sqrt{E})$$$ on both.
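A sketch of this square root decomposition, with adjacency lists as input (the names and the exact choice $$$K = \sqrt{E}$$$ are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Square-root decomposition: vertices with more than K neighbors are "heavy".
// b[u] is maintained only for heavy u and always equals the answer to a query on u.
struct SqrtNeighborSum {
    int K;
    vector<vector<int>> adj, heavyNb;    // heavyNb[v] = heavy neighbors of v
    vector<long long> a, b;
    vector<bool> heavy;
    SqrtNeighborSum(const vector<vector<int>>& g) : adj(g) {
        int n = (int)g.size(), degSum = 0;
        for (auto& nb : g) degSum += (int)nb.size();         // = 2E
        K = max(1, (int)sqrt(degSum / 2.0));                 // K ~ sqrt(E)
        a.assign(n, 0); b.assign(n, 0);
        heavy.assign(n, false); heavyNb.assign(n, {});
        for (int v = 0; v < n; v++) heavy[v] = (int)g[v].size() > K;
        for (int v = 0; v < n; v++)
            for (int u : g[v]) if (heavy[u]) heavyNb[v].push_back(u);
    }
    void update(int v, long long x) {    // O(E / K): at most 2E/K heavy vertices exist
        a[v] += x;
        for (int u : heavyNb[v]) b[u] += x;
    }
    long long query(int v) {             // O(K) for light vertices, O(1) for heavy ones
        if (heavy[v]) return b[v];
        long long res = 0;
        for (int u : adj[v]) res += a[u];
        return res;
    }
};

int main() {
    // Triangle 0-1-2 with a pendant vertex 3 attached to 0 (so vertex 0 is heavy here).
    vector<vector<int>> g = {{1, 2, 3}, {0, 2}, {0, 1}, {0}};
    SqrtNeighborSum s(g);
    s.update(0, 5); s.update(2, 1);
    cout << s.query(1) << "\n";          // neighbors of 1 are {0, 2}: prints 6
}
```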
Let's analyze what happened here
The tree solution is significantly better than the general graph solution. Applying the general solution to a tree gives us $$$O(\sqrt{V})$$$ instead of $$$O(1)$$$. So what was this magic in trees that allowed us to cheat like this?
The magic boils down to relating vertices and their parents. Let's generalize this to any graph:
Let's direct every edge of the graph in some direction. For every vertex $$$v$$$, define $$$T_v$$$ as the set of vertices to which $$$v$$$ has an outgoing edge; think of them as $$$v$$$'s "parents". The sum over a vertex's neighbors splits into the sum over its outgoing neighbors and the sum over its incoming neighbors. Now the tree solution generalizes:
Maintain $$$a_v$$$ for every vertex as usual, and maintain $$$b_v$$$ for every vertex as the sum of the values of its incoming neighbors. Then on an update $$$(v, x)$$$ we add $$$x$$$ to $$$a_v$$$, and to $$$b_u$$$ for all $$$u \in T_v$$$. On a query $$$v$$$, we return $$$b_v + \sum_{u\in T_v} a_u$$$.
The time complexity becomes $$$O(\max |T_i|)$$$. While this result doesn't improve our situation in dense graphs, it can be useful in different ones — for example, trees.
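A sketch of this orientation-based scheme; it takes the outgoing adjacency lists (playing the role of $$$T_v$$$) as input, and how to pick a good orientation is discussed next:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Neighbor sums under a fixed orientation: out[v] lists v's "parents" T_v,
// and b[v] holds the sum of the values of v's incoming neighbors.
struct OrientedNeighborSum {
    vector<vector<int>> out;
    vector<long long> a, b;
    OrientedNeighborSum(const vector<vector<int>>& outEdges)
        : out(outEdges), a(outEdges.size(), 0), b(outEdges.size(), 0) {}
    void update(int v, long long x) {   // O(|T_v|)
        a[v] += x;
        for (int u : out[v]) b[u] += x;
    }
    long long query(int v) {            // O(|T_v|)
        long long res = b[v];
        for (int u : out[v]) res += a[u];
        return res;
    }
};

int main() {
    // Path 0 - 1 - 2 with both edges directed towards 1, i.e. T_0 = {1}, T_2 = {1}.
    OrientedNeighborSum s({{1}, {}, {1}});
    s.update(0, 2);
    s.update(2, 3);
    cout << s.query(1) << "\n";         // prints 5
}
```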
Minimizing $$$\max |T_i|$$$
Given a graph, let's say we want to find the minimum value of $$$\max |T_i|$$$ among all possible ways to direct the edges. Turns out that it is computable in polynomial time!
Solution: Suppose we want to check whether we can direct the edges such that $$$\max |T_i| \leq k$$$. Create a bipartite graph with $$$E$$$ vertices on the left side and $$$V$$$ vertices on the right side. We will also have a source $$$S$$$ and a sink $$$T$$$.
- For every vertex $$$e$$$ on the left side, add an edge $$$S \to e$$$ with capacity $$$1$$$.
- For every vertex $$$e$$$ on the left side, corresponding to the edge $$$(u, v)$$$, add two edges of capacity $$$1$$$ from $$$e$$$ to $$$u$$$ and to $$$v$$$ on the right side.
- For every vertex $$$v$$$ on the right side, add an edge $$$v \to T$$$ with capacity $$$k$$$.
Now, if the maximum flow on this graph is $$$|E|$$$, then we have successfully directed every edge so that $$$|T_i| \leq k$$$ (the directions can be deduced from the flow), and otherwise, we need to increase $$$k$$$.
The simplest way to do this is with Ford-Fulkerson, incrementing $$$k$$$ by $$$1$$$ whenever it becomes necessary, for a total complexity of $$$O(E^2)$$$.
You can also binary search on $$$k$$$, and use Dinic's. This should end up being $$$O(E \sqrt{E} \log E)$$$, with a similar analysis to how Dinic's performs on bipartite graphs.
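For illustration, here is a compact sketch of the feasibility check for a single value of $$$k$$$, using plain augmenting paths (Ford-Fulkerson); the incremental and binary-search strategies above just wrap calls to it. All names are mine and it is not tuned for performance.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Simple augmenting-path (Ford-Fulkerson) max flow, enough for a sketch.
struct MaxFlow {
    struct Edge { int to, cap; };
    vector<Edge> e;
    vector<vector<int>> g;
    vector<bool> vis;
    MaxFlow(int n) : g(n) {}
    void addEdge(int u, int v, int c) {
        g[u].push_back((int)e.size()); e.push_back({v, c});
        g[v].push_back((int)e.size()); e.push_back({u, 0});   // residual edge
    }
    bool dfs(int u, int t) {                  // push one unit along some augmenting path
        if (u == t) return true;
        vis[u] = true;
        for (int id : g[u])
            if (!vis[e[id].to] && e[id].cap > 0 && dfs(e[id].to, t)) {
                e[id].cap--; e[id ^ 1].cap++;
                return true;
            }
        return false;
    }
    int run(int s, int t) {
        int flow = 0;
        while (true) {
            vis.assign(g.size(), false);
            if (!dfs(s, t)) return flow;
            flow++;
        }
    }
};

// Can the undirected edges be oriented so that every out-degree is <= k?
// Nodes: 0 = source, 1..E = edge-nodes, E+1..E+V = vertex-nodes, E+V+1 = sink.
bool feasible(int V, const vector<pair<int,int>>& edges, int k) {
    int E = (int)edges.size(), S = 0, T = E + V + 1;
    MaxFlow f(E + V + 2);
    for (int i = 0; i < E; i++) {
        f.addEdge(S, 1 + i, 1);
        f.addEdge(1 + i, 1 + E + edges[i].first, 1);
        f.addEdge(1 + i, 1 + E + edges[i].second, 1);
    }
    for (int v = 0; v < V; v++) f.addEdge(1 + E + v, T, k);
    return f.run(S, T) == E;
}

int main() {
    vector<pair<int,int>> triangle = {{0, 1}, {1, 2}, {2, 0}};
    cout << feasible(3, triangle, 1) << " " << feasible(3, triangle, 0) << "\n";  // prints "1 0"
}
```

To read off the orientation, check which of the two middle edges out of each edge-node carries the unit of flow: the endpoint that receives it is the one charged for that edge, i.e. the edge is directed out of it.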
Another nice part is that we don't really need to minimize $$$\max |T_i|$$$ exactly, only up to a constant factor, since it only matters asymptotically (well, at least I don't care about minimizing it exactly). If we're willing to minimize it up to a factor of $$$2$$$, then instead of binary searching with the arithmetic mean, we can binary search with the geometric mean, for $$$O(\log \log E)$$$ iterations: indeed, after one iteration we know the answer up to a factor of $$$\sqrt{E}$$$. In general, after $$$t$$$ iterations we know the answer up to a factor of $$$E^{2^{-t}}$$$, and when $$$t = \log \log E$$$:
$$$ E^{2^{-\log \log E}} = E^{\frac{1}{\log E}} = 2 $$$.
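A small sketch of this multiplicative binary search, with a toy monotone predicate standing in for the flow feasibility check (the function names and bounds are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Multiplicative ("geometric mean") binary search. Assumes feasible() is
// monotone and feasible(hi) holds; returns a feasible k within a factor 2
// of the optimum, using O(log log hi) calls to feasible().
long long approxMinFeasible(long long hi, function<bool(long long)> feasible) {
    long long lo = 1;                                            // not necessarily feasible
    while (hi > 2 * lo) {
        long long mid = llround(sqrt((double)lo * (double)hi));  // geometric mean
        if (feasible(mid)) hi = mid;
        else lo = mid;
    }
    return hi;
}

int main() {
    // Toy monotone predicate standing in for the max-flow check: k works iff k >= 37.
    long long answer = approxMinFeasible(1000000, [](long long k) { return k >= 37; });
    cout << answer << "\n";   // prints 49, which is within a factor 2 of the optimum 37
}
```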
Another interesting fact is that there is a relatively simple formula that expresses the minimum achievable $$$\max |T_i|$$$:
Formula: For a subset of vertices $$$S \subseteq V$$$, define $$$e_S$$$ as the number of edges in the subgraph induced by $$$S$$$. Then the minimum achievable value is:
$$$d(G) := \left\lceil \max_{\emptyset \neq S \subseteq V} \frac{e_S}{|S|} \right\rceil$$$

In particular, in a tree every subgraph contains strictly fewer edges than vertices, so this value is $$$1$$$. Also, in a graph that has no cycles of length up to $$$2k$$$, the number of edges is $$$O(V^{1 + \frac{1}{k}})$$$, and since this applies to all of its subgraphs, this value is at most $$$O(V^{\frac{1}{k}})$$$ (link)
Proof: We will use the definition from the formula spoiler above, and the flow algorithm mentioned above. Define $$$k = d(G)$$$. First, the answer cannot be less than $$$k$$$: otherwise there would exist $$$S \subseteq V$$$ such that $$$ans \cdot |S| < e_S$$$, which is impossible by the pigeonhole principle (every edge induced by $$$S$$$ leaves some vertex of $$$S$$$, so some vertex would need out-degree larger than $$$ans$$$).
Now let's prove that this $$$k$$$ suffices. In the maximum flow graph, let's show that the minimum cut is at least $$$|E|$$$. We describe every cut by its source side: the source $$$S$$$ itself, a subset $$$L$$$ of the left side, and a subset $$$R$$$ of the right side. The value of the cut for such $$$(L, R)$$$ is:

$$$ |E| - |L| + \text{(edges from L to outside of R)} + k \cdot |R| $$$

Let's fix $$$R$$$ and find the best $$$L$$$. Every vertex on the left side has $$$2$$$ outgoing edges, to the two endpoints of its edge on the right side. You can see that if both endpoints are in $$$R$$$, it is strictly better to include this vertex in $$$L$$$; if exactly one endpoint is in $$$R$$$, it doesn't matter; and if neither endpoint is in $$$R$$$, it is strictly worse. So let's take $$$L$$$ to be the set of left vertices both of whose endpoints are in $$$R$$$. If you think of $$$R$$$ as a subset of vertices of the original graph, then $$$L$$$ is the set of edges of the induced subgraph. So:
$$$ |E| - e_R + 0 + k \cdot |R| \geq |E| - e_R + \frac{e_R}{|R|} \cdot |R| = |E| $$$

Hence, for every $$$R$$$, the best $$$L$$$ gives a cut of size at least $$$|E|$$$, so all cuts are at least $$$|E|$$$.
To finish with what I covered in this area: the main issue is that if we first compute this orientation and only then start the queries and updates, this algorithm might be too slow. So here is a linear-time algorithm that directs all of the edges such that the maximum outgoing degree is at most $$$2d(G)$$$, which is asymptotically optimal.
Approximation algorithm: Repeatedly find a vertex of minimum degree, direct all of its edges outwards, and remove it (and its edges). To implement this in linear time, create lists $$$b_0, \ldots, b_{n-1}$$$, and put every vertex of degree $$$i$$$ into $$$b_i$$$. Maintain a pointer to the first nonempty list. Iteratively, pop a vertex from this list, direct all of its edges that lead to vertices we have yet to visit, and for every such neighbor, subtract $$$1$$$ from its degree and move it to its new list. After that, subtract $$$1$$$ from the pointer, and then increment it until another nonempty list is found.
You don't actually have to remove a vertex from its current list; just mark vertices as visited whenever you process them, and when you pop a vertex from a list, ignore it if it is already visited.
The approximation guarantee follows from the fact that any induced subgraph of $$$G$$$ (by vertex set $$$S$$$) has average degree $$$\frac{2e_S}{|S|} \leq 2d(G)$$$, thus at every step we choose a vertex of degree at most $$$2d(G)$$$ (since it is of minimum degree in the remaining graph).
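A sketch of this bucket-based peeling, returning the outgoing lists $$$T_v$$$ (names are mine; the buckets and the stale-entry trick follow the description above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Linear-time greedy orientation: repeatedly peel a vertex of minimum remaining
// degree and direct all of its remaining edges outwards. The returned out[v]
// plays the role of T_v and has size at most 2 * d(G).
vector<vector<int>> orientGreedy(int n, const vector<vector<int>>& adj) {
    vector<int> deg(n);
    for (int v = 0; v < n; v++) deg[v] = (int)adj[v].size();
    vector<vector<int>> bucket(n), out(n);       // bucket[d] holds vertices of (possibly stale) degree d
    for (int v = 0; v < n; v++) bucket[deg[v]].push_back(v);
    vector<bool> done(n, false);
    int ptr = 0;                                 // points at the smallest possibly-nonempty bucket
    for (int it = 0; it < n; it++) {
        // Skip empty buckets and stale (already processed) entries.
        while (bucket[ptr].empty() || done[bucket[ptr].back()]) {
            if (!bucket[ptr].empty()) bucket[ptr].pop_back();
            else ptr++;
        }
        int v = bucket[ptr].back(); bucket[ptr].pop_back();
        done[v] = true;
        for (int u : adj[v]) if (!done[u]) {
            out[v].push_back(u);                 // direct the edge v -> u
            bucket[--deg[u]].push_back(u);       // u's remaining degree drops by one
        }
        ptr = max(0, ptr - 1);                   // the minimum degree can drop by at most one
    }
    return out;
}

int main() {
    // Triangle 0-1-2 with a pendant vertex 3 attached to 0; here d(G) = 1.
    vector<vector<int>> adj = {{1, 2, 3}, {0, 2}, {0, 1}, {0}};
    auto out = orientGreedy(4, adj);
    for (int v = 0; v < 4; v++) cout << "|T_" << v << "| = " << out[v].size() << "\n";  // all sizes <= 2
}
```

The returned lists can be fed directly into the orientation-based structure from before.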
Before I found this algorithm, I had a different one, with a complex proof that it also achieves a constant factor approximation. The algorithm gives an arbitrary initial direction to every edge. Then observe that we can flip an edge and update our data in $$$O(1)$$$. For every vertex we get a query/update on, flip all of its outgoing edges and then operate on it. You can attempt to prove its amortized complexity if you want.
Random Dense Graphs
I have to address the issue that we still don't have anything good for dense graphs. Of course, for large $$$n$$$, we're probably not going to be given a dense graph explicitly, but it would still be interesting to analyze this case.
Let's backtrack to the initial problem (not the graph formulation). Up until now, we were given $$$n$$$ elements and $$$m$$$ "query subsets" as input, while all of our strategies consist of choosing a new number of elements $$$N$$$ (for instance, in a tree we had $$$N = 2n$$$, since we also initialized another array $$$b$$$) and constructing $$$n$$$ "update subsets" and $$$m$$$ "query subsets", so that every update adds a value to every element of an update subset, and every query computes the sum over a query subset.
Let's prove a depressing $$$\Omega \left( \frac{n}{\log n} \right)$$$ lower bound for this approach.
Proof: I'll try to be rigorous, so this might be a bit slow.
For simplicity, suppose we are given $$$n$$$ elements and $$$n$$$ subsets (or alternatively, assume we want to solve this case). There are $$$2^{n^2}$$$ possible inputs, and each of them requires a different construction: indeed, take two different inputs. Then there exist $$$i, j$$$ such that in one of them subset $$$i$$$ contains element $$$j$$$, and in the other it doesn't. Suppose towards contradiction that we can give them the same construction. With this construction we can update element $$$j$$$ with $$$1$$$, and then query subset $$$i$$$. The output cannot be both $$$0$$$ and $$$1$$$, so it cannot fit both inputs.
Thus, we need $$$2^{n^2}$$$ different constructions. Fix a function $$$f(n)$$$ such that we want all resulting update and query subsets to be of size up to $$$f(n)$$$ (for the final complexity to be $$$O(f(n))$$$). Let's see how many constructions there can be for $$$N$$$ new elements:
Each of the $$$2n$$$ subsets ($$$n$$$ query and $$$n$$$ update) has at most $$$\binom{N}{f(n)} \leq N^{f(n)}$$$ ways to be chosen, for a total upper bound of $$$\left( {N^{f(n)}} \right) ^{2n} = N^{2nf(n)} = 2^{\log_2 (N) \cdot 2nf(n)}$$$.
For different inputs we can choose different $$$N$$$, but this is negligible: suppose all our constructions work with $$$N$$$ bounded by some polynomial $$$p(n)$$$. Then for large enough $$$n$$$ we have $$$\log_2 N \leq c \log_2 n$$$ (for some constant $$$c$$$). So the total number of constructions, summed over all $$$N$$$ from $$$1$$$ to $$$p(n)$$$, is upper bounded by:
$$$ p(n) \cdot 2^{ c \log_2 n \cdot 2nf(n) } \leq 2^{ c^2 \log_2 n \cdot 2nf(n) } $$$

for some constant $$$c$$$, and this must exceed $$$2^{n^2}$$$. Comparing the exponents, there exists a constant $$$d$$$ such that:
$$$ f(n) \geq \frac{n}{d \log n} $$$

If we pretend that $$$N$$$ being super-polynomial in $$$n$$$ is not a problem, then don't forget that while the lower bound becomes $$$\frac{n}{d \log N}$$$, we also require $$$\log N$$$ work to resolve accesses to such an enormous amount of memory (and yes, technically we also need $$$O(\log n)$$$ time to access an array of size $$$n$$$, so our lower bound is even worse, but we can sweep this under the rug like any other algorithm does).
Furthermore, I believe that any algorithm can be reduced via linear algebra to an algorithm that constructs update subsets and query subsets, but I was unable to prove it.
Anyway, there is also an $$$O\left( \frac{n}{\log n} \right)$$$ construction, so it seems we have hit our limit. It follows the pattern that many algorithms use to shave a log factor off the complexity, like RMQ.
The almost-awful-but-not-quite algorithm: Fix $$$L$$$ (it's going to be very small). For every subset of indices in $$$[0, L)$$$, we keep the sum of the elements in that subset. Similarly, we do this for every subset of indices in $$$[L, 2L)$$$, $$$[2L, 3L)$$$, and so on.
Upon an update $$$(i, x)$$$, we find the block of size $$$L$$$ containing $$$i$$$, and for every subset in there containing $$$i$$$ we add $$$x$$$. This works in $$$O(2^L)$$$.
Upon a query $$$j$$$, we break subset $$$j$$$ (during preprocessing) into $$$\frac{n}{L}$$$ parts, one for each block of size $$$L$$$. Then it remains to sum $$$\frac{n}{L}$$$ values.
We want to minimize the maximum of $$$2^L$$$ and $$$\frac{n}{L}$$$. Choosing anything like $$$L = \frac{\log_2 n}{c}$$$ for $$$c > 1$$$ works; for simplicity you can choose $$$c = 2$$$, so that $$$2^L = \sqrt{n}$$$, and obtain the wanted runtime (and even $$$c = 3$$$ for less memory consumption!).
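A sketch of this construction, with $$$L$$$ set to roughly $$$\frac{\log_2 n}{2}$$$ (the interface and names are mine; each query subset is preprocessed into one bitmask per block):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Block construction: split [0, n) into blocks of size L and store, for every
// block, the sum over each of its 2^L index subsets. A query subset is
// preprocessed into one bitmask per block.
struct SubsetSums {
    int n, L, blocks;
    vector<vector<long long>> sums;        // sums[blk][mask]
    vector<vector<int>> queryMasks;        // queryMasks[q][blk]
    SubsetSums(int n, const vector<vector<int>>& querySubsets)
        : n(n), L(max(1, (int)(log2((double)max(n, 2)) / 2))) {   // L ~ (log2 n) / 2
        blocks = (n + L - 1) / L;
        sums.assign(blocks, vector<long long>(1 << L, 0));
        queryMasks.assign(querySubsets.size(), vector<int>(blocks, 0));
        for (int q = 0; q < (int)querySubsets.size(); q++)
            for (int i : querySubsets[q])
                queryMasks[q][i / L] |= 1 << (i % L);
    }
    void update(int i, long long x) {      // O(2^L) = O(sqrt(n)) for this choice of L
        int blk = i / L, bit = i % L;
        for (int mask = 0; mask < (1 << L); mask++)
            if (mask >> bit & 1) sums[blk][mask] += x;
    }
    long long query(int q) {               // O(n / L) = O(n / log n)
        long long res = 0;
        for (int blk = 0; blk < blocks; blk++) res += sums[blk][queryMasks[q][blk]];
        return res;
    }
};

int main() {
    // Two query subsets over 8 elements: {0, 3, 5} and {1, 2}.
    SubsetSums ds(8, {{0, 3, 5}, {1, 2}});
    ds.update(3, 4);
    ds.update(1, 7);
    cout << ds.query(0) << " " << ds.query(1) << "\n";   // prints "4 7"
}
```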
As a final note, this means that I also don't know how to do $$$o \left( \frac{d(G)}{\log n} \right)$$$ for any graph $$$G$$$ (following the definition of $$$d(G)$$$ from before). Can we use this trick to reduce the time complexity below $$$d(G)$$$? (when it is bigger than $$$\log n$$$).