Segment trees are not as memory efficient as Fenwick trees, which often becomes the bottleneck in many problems. There have been attempts at reducing the space required to build a segment tree (e.g. the iterative implementation takes $$$2N$$$ space instead of $$$4N$$$). But there is a very simple solution to this that asymptotically improves the space complexity (sorry if this is already well-known).
When building the segment tree, we can simply stop the recursion when $$$r - l \leq \lg n$$$, so that each leaf covers a block of $$$O(\lg n)$$$ elements. This means the size of the tree is $$$O \left( \frac{n}{\lg n} \right)$$$ instead of $$$O(n)$$$.
Fortunately, we can still answer queries (and perform updates) in $$$O(\lg n)$$$ time for most problems. For the internal nodes, we simply collect the contribution of the node as usual. For a leaf node with range $$$[l, r]$$$, we have to scan through the original array $$$a[l \cdots r]$$$ with a loop and calculate its contribution to the query. Since $$$r - l \leq \lg n$$$, the loop still takes $$$O(\lg n)$$$ time. Any range update/query visits only two leaf nodes, so the time required to process the leaf nodes is also $$$O(\lg n)$$$.
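As a concrete illustration, here is a minimal sketch for range sums with point updates; the struct name, member names, and the exact choice of block size $$$B \approx \lg n$$$ are my own, not something fixed by the trick itself:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the trick: recursion stops once a segment has at most B ~ lg n
// elements, so the tree has O(n / lg n) nodes. Queries scan the original
// array inside the (at most two) partially covered leaf blocks.
struct SmallSegTree {
    int B, root;                 // leaf block size and root node id
    vector<long long> a;         // the original array, kept for leaf scans
    vector<long long> sum;       // sum[v] = sum of node v's segment
    vector<int> lo, hi, lc, rc;  // node v covers a[lo[v]..hi[v]); children lc/rc, -1 at leaves

    int build(int l, int r) {    // build the node for a[l..r), return its id
        int v = sum.size();
        sum.push_back(accumulate(a.begin() + l, a.begin() + r, 0LL));
        lo.push_back(l), hi.push_back(r), lc.push_back(-1), rc.push_back(-1);
        if (r - l > B) {         // stop recursing once the segment fits in one block
            int m = (l + r) / 2;
            int L = build(l, m), R = build(m, r);
            lc[v] = L, rc[v] = R;
        }
        return v;
    }
    SmallSegTree(vector<long long> arr) : a(move(arr)) {
        B = max(1, __lg((int)a.size() + 1));
        root = build(0, a.size());
    }
    void update(int i, long long x) {  // a[i] += x: fix sums on the root-to-leaf path
        for (int v = root;; v = i < lo[rc[v]] ? lc[v] : rc[v]) {
            sum[v] += x;
            if (lc[v] == -1) { a[i] += x; return; }
        }
    }
    long long query(int ql, int qr, int v = 0) {  // sum of a[ql..qr); node 0 is the root
        int l = lo[v], r = hi[v];
        if (qr <= l || r <= ql) return 0;
        if (ql <= l && r <= qr) return sum[v];    // fully covered node
        if (lc[v] == -1) {                        // partially covered leaf: scan the block
            long long s = 0;
            for (int i = max(l, ql); i < min(r, qr); i++) s += a[i];
            return s;
        }
        return query(ql, qr, lc[v]) + query(ql, qr, rc[v]);
    }
};

int main() {
    SmallSegTree st({5, 1, 4, 2, 3, 6, 0, 7});
    cout << st.query(1, 6) << "\n";  // 1 + 4 + 2 + 3 + 6 = 16
    st.update(2, 10);
    cout << st.query(1, 6) << "\n";  // 26
}
```

With $$$B \approx \lg n$$$ every leaf holds more than $$$B/2$$$ elements, so there are at most about $$$2n/\lg n$$$ leaves and roughly $$$4n/\lg n$$$ nodes in total.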
This trick may not be that useful if we need only one segment tree in our problem. However, many problems require building multiple segment trees. For example, one common pattern is to create a segment tree for each bit position. Applying this trick to such problems can reduce the space complexity from $$$O(n \lg n)$$$ to $$$O(n)$$$.
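As a hedged sketch of that pattern, reusing the hypothetical `SmallSegTree` above: build one tree per bit over the 0/1 array of that bit (a complete solution to an XOR-on-segment style problem would also need range flip updates, which are omitted here):

```cpp
// Sketch only: one SmallSegTree per bit position, each built over the 0/1
// array of that bit. With 20 bits this is 20 * O(n / lg n) = O(n) nodes in
// total, instead of O(n lg n) for 20 regular segment trees.
vector<SmallSegTree> buildPerBit(const vector<long long>& a, int bits = 20) {
    vector<SmallSegTree> trees;
    for (int b = 0; b < bits; b++) {
        vector<long long> bitArr(a.size());
        for (size_t i = 0; i < a.size(); i++) bitArr[i] = a[i] >> b & 1;
        trees.emplace_back(bitArr);  // trees[b].query(l, r) counts set b-th bits in [l, r)
    }
    return trees;
}
```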
Example Problem: 242E - XOR on Segment
Great blog! Can you please share some problems where we construct a segment tree for each bit? Thank You!
242E - XOR on Segment
Nice, Thank You!
Thanks. I'll add this example to the blog!
If you implement seg trees with this midpoint https://codeforces.me/blog/entry/112755 then if you loop over the seg tree in BFS order, the lengths of the node segments are non-increasing: the set of nodes with length $$$\geq X$$$ will be some prefix of $$$[1, 2n)$$$.

Sketch of proof.

Induction assumption: the nodes on the $$$i$$$'th row have lengths $$$2^{k+1}, \ldots, 2^{k+1}, x, 2^k, \ldots, 2^k$$$ with $$$2^k < x < 2^{k+1}$$$.

Then, going from the $$$i$$$'th row to the $$$(i+1)$$$'th row:

- each $$$2^{k+1}$$$ splits into 2 children of length $$$2^k$$$;
- each $$$2^k$$$ splits into 2 children of length $$$2^{k-1}$$$;
- $$$x$$$ splits into children of lengths $$$2^k, y$$$ or $$$y, 2^{k-1}$$$ (with $$$2^{k-1} < y < 2^k$$$), or $$$2^k, 2^{k-1}$$$.

Thus the $$$(i+1)$$$'th row looks like $$$2^k, \ldots, 2^k, y, 2^{k-1}, \ldots, 2^{k-1}$$$ with $$$2^{k-1} < y < 2^k$$$, which satisfies the assumption again.
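I haven't restated the exact formula from the linked blog; the following is my own construction of a midpoint with exactly the split cases used above (one child of every non-power-of-two segment is a power of two), plus a quick BFS check of the non-increasing-lengths claim:

```cpp
#include <bits/stdc++.h>
using namespace std;

// My construction, not necessarily the formula from the linked blog: for a
// segment [l, r) of length len with 2^k < len < 2^(k+1), cut off a
// power-of-two child. If len >= 3 * 2^(k-1) the left child gets 2^k elements
// (cases "2^k, y" and "2^k, 2^(k-1)"); otherwise the right child gets
// 2^(k-1) elements (case "y, 2^(k-1)").
int mid(int l, int r) {
    int len = r - l;
    if ((len & (len - 1)) == 0) return l + len / 2;  // power of two: halve
    int k = __lg(len);                               // 2^k < len < 2^(k+1)
    if (len >= 3 << (k - 1)) return l + (1 << k);
    return r - (1 << (k - 1));
}

int main() {
    // quick check for n = 37: in BFS order, segment lengths never increase
    queue<pair<int, int>> q;
    q.push({0, 37});
    int prev = INT_MAX;
    while (!q.empty()) {
        auto [l, r] = q.front(); q.pop();
        assert(r - l <= prev);
        prev = r - l;
        if (r - l > 1) q.push({l, mid(l, r)}), q.push({mid(l, r), r});
    }
    cout << "ok\n";
}
```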
I still don't really get why the space complexity is $$$O(n \log n)$$$ in general?
Maybe you mean $$$O(\frac{n}{\lg n})$$$? An intuitive way is to think of it as building a segment tree over blocks (that is, each leaf of the segment tree represents a block), where each block has $$$O(\lg n)$$$ elements; then we have $$$O(\frac{n}{\lg n})$$$ blocks, and building a segment tree over them requires $$$O(\frac{n}{\lg n})$$$ nodes.
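For concreteness (the numbers are mine): with $$$n = 10^6$$$ we have $$$\lg n \approx 20$$$, so about $$$5 \times 10^4$$$ blocks and on the order of $$$10^5$$$ tree nodes, versus the $$$2 \times 10^6$$$ to $$$4 \times 10^6$$$ nodes of a standard segment tree.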
If you mean the last paragraph, that's an example where you build a segment tree for every bit position. So regular segment trees will use $$$O(n \log n)$$$ nodes in total, while the ones in the blog use $$$O(\log n \cdot \frac{n}{\log n}) = O(n)$$$ nodes.
Nice. To me, it feels like a combination of segment tree and sqrt decomposition ideas.
There were DSA problems where I used something similar, like the sqrt decomposition idea, but I was only able to achieve (or at least to prove a bound of) $$$O(N \log \log N)$$$ complexity.
It is nice to know more about Segment Tree, thanks.