There once was a man named Hugo Ryckeboer, and he came up with a neat way to do static fixed-length RMQ queries in $$$O(1)$$$.
After playing around with it, I was able to extend it to do range queries of arbitrary associative operations in $$$O(1)$$$, with $$$O(N \log N)$$$ precomputation. This is the same time complexity achieved by sparse table with idempotent operations, but my data structure reaches this bound for any associative operation.
EDIT: this data structure is called "disjoint sparse table"
Let's go through some background first.
Prefix Sums
Take some array A and compute prefix sums in array P.
A = [ 1,  3,  6,  5,  2,  7,  1,  4]
P = [ 1,  4, 10, 15, 17, 24, 25, 29]
Now we can compute range sum queries in $$$O(1)$$$ by subtracting one prefix from another.
For example, query [2, 7) is $$$25 - 4 = 21$$$.
Very neat, but we need an inverse operation (subtracting, in this case).
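Here is a minimal sketch of the prefix-sum version (I add a leading zero to P so empty prefixes work, which shifts its indices by one relative to the table above):

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> A = {1, 3, 6, 5, 2, 7, 1, 4};
    // P[i] = A[0] + ... + A[i-1], so the sum of [l, r) is P[r] - P[l]
    vector<int> P(A.size() + 1, 0);
    for (size_t i = 0; i < A.size(); ++i) P[i + 1] = P[i] + A[i];
    cout << P[7] - P[2] << "\n"; // query [2, 7) -> 25 - 4 = 21
}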
Suffix-Prefix Sums
Instead, take the array A and compute a suffix sums array S for the first half and a prefix sums array P for the second half.
A   = [ 1,  3,  6,  5,  2,  7,  1,  4]
S,P = [15, 14, 11,  5][ 2,  9, 10, 14]
Now we can compute queries by adding a suffix of the first half and a prefix of the second half.
For example, query [2, 7) is $$$11 + 10 = 21$$$.
Very cool, but queries must cross the halfway point of the array.
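A quick sketch of this suffix-plus-prefix version (again each array gets a leading zero, shifting indices by one relative to the table above):

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> A = {1, 3, 6, 5, 2, 7, 1, 4};
    int n = (int)A.size(), mid = n / 2;
    // S[i] = A[i] + ... + A[mid-1]; P[j] = A[mid] + ... + A[mid+j-1]
    vector<int> S(mid + 1, 0), P(n - mid + 1, 0);
    for (int i = mid - 1; i >= 0; --i) S[i] = S[i + 1] + A[i];
    for (int j = mid; j < n; ++j) P[j - mid + 1] = P[j - mid] + A[j];
    // any query [l, r) with l <= mid <= r: a suffix of the first half plus a prefix of the second
    int l = 2, r = 7;
    cout << S[l] + P[r - mid] << "\n"; // 11 + 10 = 21
}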
Hugo Trick
Instead, take the array A, cut it up into intervals of length k, and compute the prefix and suffix sums of each one.
k = 3
A = [ 1,  3,  6,  5,  2,  7,  1,  4]
P = [[ 1,  4, 10][ 5,  7, 14][ 1,  5]]
S = [[10,  9,  6][14,  9,  7][ 5,  4]]
Now we can answer any query that crosses from one interval to the next by adding a suffix of the first interval to a prefix of the second.
In particular, notice that we can compute all queries of length k using that idea.
For example, we can now answer [5, 8), whereas before we could not, as it doesn't cross the halfway point of the array.
And that is the entire trick that I was taught years ago. In my country we call it "DP Hugo" or "Hugo Trick".
Very rad, but it is limited to fixed-length queries.
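A sketch of this fixed-length version, with the per-block prefix and suffix sums stored in flat arrays (the pre/suf layout and the aligned-query special case are my own choices):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int k = 3;
    vector<int> A = {1, 3, 6, 5, 2, 7, 1, 4};
    int n = (int)A.size();
    // pre[i] = sum of i's block up to i (inclusive), suf[i] = sum of i's block from i to its end
    vector<long long> pre(n), suf(n);
    for (int i = 0; i < n; ++i)
        pre[i] = (i % k == 0 ? 0LL : pre[i - 1]) + A[i];
    for (int i = n - 1; i >= 0; --i)
        suf[i] = (i % k == k - 1 || i == n - 1 ? 0LL : suf[i + 1]) + A[i];
    // a query [l, l+k) either is exactly one block or crosses into the next one
    auto query = [&](int l) {
        int r = l + k; // exclusive end
        if (l % k == 0) return pre[r - 1]; // aligned: the whole block
        return suf[l] + pre[r - 1];        // suffix of l's block + prefix of the next block
    };
    cout << query(5) << "\n"; // [5, 8) -> 7 + 5 = 12
}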
Enhanced Hugo Trick
Let's add the capability for some different-length queries. Take the array A, and cut it up into overlapping intervals of length 2k, spaced by k positions. Then, compute prefix and suffix sums for each one.
k = 3
A = [ 1,  3,  6,  5,  2,  7,  1,  4]
P = [[ 1,  4, 10, 15, 17, 24][ 1,  5]]
    [ 1,  4, 10][ 5,  7, 14, 15, 19]
S = [[24, 23, 20, 14,  9,  7][ 5,  4]]
    [10,  9,  6][19, 14, 12,  5,  4]
(because consecutive intervals overlap, each of P and S is shown on two staggered rows)
Now, we can still answer any query that crosses from one interval to another, and that now includes all queries of length between k and 2k (inclusive).
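For example, query [2, 8) is $$$20 + 5 = 25$$$: a suffix of the interval starting at 0 plus a prefix of the interval starting at 6.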
Very cute, but still limited to some specific lengths of queries.
Multilevel Hugo Trick (Disjoint Sparse Table)
Instead, create $$$\log N$$$ instances of the data structure described above, with k=1,2,4,8,16,...; then each one can handle query lengths from one power of two to the next.
Now, given some query, you can take the logarithm of its length to know which instance will be able to handle it. The logarithm can be taken in $$$O(1)$$$ using some bit manipulation. Then each query is answered with a single operation.
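For instance, something like this (this is also what the code below does; __lg or __builtin_clz would work equally well before C++20):

#include <bit>

// Which instance handles a query of length len (len >= 1)? We need k = 2^level with
// k <= len < 2k, i.e. level = floor(log2(len)), which bit_width computes in O(1).
int level_for(unsigned len) {       // hypothetical helper
    return std::bit_width(len) - 1; // e.g. level_for(5) == 2, so the k = 4 instance
}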
Conclusion
The examples use range sum queries, but this will actually work for any associative operation (I also use an identity element to make the implementation easier, but it is not strictly required).
Therefore, we can answer arbitrary range queries in $$$O(1)$$$. More precisely, we can answer range queries with exactly one call to the operation per query.
code:
#include <bits/stdc++.h>
using namespace std;

#define OPER add // associative operation
#define IDEN 0   // identity element
#define sz(x) int((x).size()) // shorthand used below

int add(int a, int b) { return a + b; }

// One block: suffix sums to the left of `cut` and prefix sums to the right of it,
// each extending up to 2k elements.
struct hugo_block {
    vector<int> left, right;
    int cut;
    hugo_block(vector<int>& data, int k, int _cut) : cut{_cut} {
        right.push_back(IDEN);
        for (int i = 0; i < 2*k && cut+i < sz(data); ++i)
            right.push_back(OPER(right.back(), data[cut+i]));
        left.push_back(IDEN);
        for (int i = 0; i < 2*k && cut-i-1 >= 0; ++i)
            left.push_back(OPER(data[cut-i-1], left.back()));
    }
    // query [l, r) with l <= cut <= r, cut-l <= 2k and r-cut <= 2k
    int query(int l, int r) {
        return OPER(left[cut-l], right[r-cut]);
    }
};

// One instance of the enhanced trick: cuts at every multiple of k, handling query lengths in [k, 2k].
struct hugo {
    vector<hugo_block> blocks;
    int k;
    hugo(vector<int>& data, int _k) : k{_k} {
        for (int b = 0; b*k <= sz(data); ++b) {
            blocks.emplace_back(data, k, b*k);
        }
    }
    int query(int l, int r) {
        return blocks[r/k].query(l, r);
    }
};

// log N instances with k = 1, 2, 4, ...; a query of length L goes to the instance with 2^level <= L < 2^(level+1).
struct DisjointSparseTable {
    vector<hugo> hugos;
    int levels;
    DisjointSparseTable(vector<int>& data, int _levels) : levels{_levels} {
        for (int level = 0; level < levels; ++level) {
            hugos.emplace_back(data, 1<<level);
        }
    }
    int query(int l, int r) {
        if (r == l) return IDEN;
        int level = bit_width(unsigned(r-l))-1; // floor(log2(r-l)); std::bit_width needs C++20
        return hugos[level].query(l, r);
    }
};
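And a small usage sketch, meant to be appended after the structs above (assumes C++20 for bit_width; the array is the one from the examples):

int main() {
    vector<int> A = {1, 3, 6, 5, 2, 7, 1, 4};
    DisjointSparseTable dst(A, 4);   // levels k = 1, 2, 4, 8 cover every query length up to 8
    cout << dst.query(2, 7) << "\n"; // 21
    cout << dst.query(5, 8) << "\n"; // 12
    cout << dst.query(0, 8) << "\n"; // 29 (the whole array)
}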
It is just a disjoint sparse table.
I figured it probably already existed, but I didn't know what it was called. Thank you!
It's well known in China under the name "cat tree".
Awesome! Is there a link? It'd be good to know if there are more improvements that could be added!
You can read more about the cat tree here.
Wow, I've never heard about the cat tree, looks pretty interesting!
https://www.luogu.com.cn/blog/221955/mao-shu
Sorry, it's in Chinese, but you can use Google Translate.
The article mentions a sqrt tree technique that achieves $$$O(n \log \log n)$$$ precomputation with $$$O(1)$$$ queries.
Very cool! This answers how we can find the right interval in the divide-and-conquer approach (which uses half as much space as the version explained in my post) in $$$O(1)$$$!
For anyone curious: the observation is that you just need to answer LCA queries on a complete binary tree, and you can do that easily by taking the longest common prefix of the paths to the leaves that correspond to the start and end of the query range, which can be done in $$$O(1)$$$ with some bit manipulation.
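A sketch of that bit manipulation (my own helper name; assumes the leaves are indexed 0 to 2^D - 1 and the two leaves are distinct):

#include <bit>

// The root-to-leaf path of leaf x is spelled out by the bits of x, most significant first,
// so the paths to leaves l and r agree exactly on the bits above the highest bit where they
// differ. That bit's index identifies the level of their LCA, i.e. the split point of the query.
int split_level(unsigned l, unsigned r) {
    return std::bit_width(l ^ r) - 1; // highest differing bit, O(1)
}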
Here's a super-concise implementation that came out of some code-golfing a few years ago; it's quite optimized for an implementation this small.