Motivation
Dynamic CHT with sets is hard to write. I haven't learned the Li Chao tree, but I will share this technique anyway, as it can be applied to other data structures.
AbdelmagedNour explained this to me $$$8$$$ months ago, and it introduced me to a new way of thinking. I could not find any references to something similar on the internet, so I decided to give it my best shot here.
If this already has a name, please comment it.
Prerequisites:
There are no prerequisites; however, the problem I have posted for checking your implementation uses the Convex Hull Trick. Please read this blog if you don't know about it.
The problem:
Assume we have a data structure which can add an element in $$$O(A)$$$ and answer a query in $$$O(Q)$$$; however, elements have to be added in monotonic order under some comparison function. If we receive the elements in monotonic order, we have no problem! But what if we don't? My claim is that we can still add all $$$N$$$ elements in $$$O(AN \log N)$$$ total time (amortized $$$O(A \log N)$$$ per insertion), and answer each query in $$$O(Q \log N)$$$. Usually we query for the maximum/minimum element, or for any property for which we can easily merge $$$2$$$ candidate answers.
A classical example of this is the convex hull trick data structure. It allows us to insert lines of the form $$$y = mx + c$$$ and query the minimum/maximum value of $$$y$$$ for a given $$$x$$$. Insertion is amortized $$$O(1)$$$ per line (so $$$O(n)$$$ total for $$$n$$$ lines), and a query takes $$$O(\log n)$$$ with binary search. However, this assumes that lines are inserted in monotonic order of their slopes. What if they aren't?
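For concreteness, here is a minimal sketch of such a monotone CHT for maximum queries (the names Line and MonotoneCHT and the exact interface are my own choices for illustration, not fixed by the technique):

#include <bits/stdc++.h>
using namespace std;

// y = m*x + c
struct Line { long long m, c; };

// Monotone CHT for maximum queries: add() must be called in non-decreasing order of slope.
struct MonotoneCHT {
    vector<Line> hull; // the upper envelope, slopes strictly increasing

    bool empty() const { return hull.empty(); }

    // true if line b is never the maximum once a (smaller slope) and c (larger slope) are present
    static bool bad(const Line& a, const Line& b, const Line& c) {
        return (__int128)(c.c - a.c) * (b.m - a.m) >= (__int128)(b.c - a.c) * (c.m - a.m);
    }

    void add(Line l) { // amortized O(1)
        if (!hull.empty() && hull.back().m == l.m) { // equal slopes: keep the larger constant
            if (hull.back().c >= l.c) return;
            hull.pop_back();
        }
        while (hull.size() >= 2 && bad(hull[hull.size() - 2], hull.back(), l))
            hull.pop_back();
        hull.push_back(l);
    }

    long long query(long long x) const { // O(log n): the best line's index is found by binary search
        int lo = 0, hi = (int)hull.size() - 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            // the best line is further right iff line mid+1 beats line mid at x
            if (hull[mid + 1].m * x + hull[mid + 1].c >= hull[mid].m * x + hull[mid].c)
                lo = mid + 1;
            else
                hi = mid;
        }
        return hull[lo].m * x + hull[lo].c;
    }
};

Note that add() only accepts slopes in non-decreasing order, which is exactly the restriction we now remove.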
Initially we create an array $$$blk$$$ of size $$$\lfloor \log_2 N \rfloor + 1$$$. $$$blk[i]$$$ will store a version of our data structure holding $$$2^i$$$ elements; initially, $$$blk[i]$$$ is empty for all $$$i$$$.
Now, to add the first element, we just add it to $$$blk[0]$$$. For the second element, we take the contents of $$$blk[0]$$$ together with the new element, merge them in sorted order (as in merge sort), insert the result into $$$blk[1]$$$, and clear $$$blk[0]$$$. The third element is inserted into $$$blk[0]$$$; the fourth is merged with the contents of $$$blk[0]$$$ and $$$blk[1]$$$, all of them are inserted into $$$blk[2]$$$, and $$$blk[0]$$$ and $$$blk[1]$$$ are emptied. In other words, the occupied buckets behave exactly like the bits of a binary counter being incremented.
A simple code demonstrating it would look something like this:
DS merge(DS& a, DS& b) {
    // Merges the 2 sorted data structures into a single sorted one.
    // How this is done depends on the data structure.
}
void insert(T element) {
    DS current;
    current.insert(element);
    int index = 0;
    while (!blk[index].empty()) {
        current = merge(current, blk[index]);
        blk[index].clear();
        index++;
    }
    blk[index] = current;
}
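For the CHT example, a natural merge is a two-pointer merge of the two slope-sorted hulls followed by a rebuild. This is valid because a line that is never maximal inside its own bucket can never become maximal in the union. A sketch, reusing the illustrative MonotoneCHT from above:

MonotoneCHT merge(MonotoneCHT& a, MonotoneCHT& b) {
    MonotoneCHT res; // O(|a| + |b|): both hulls are already sorted by slope
    size_t i = 0, j = 0;
    while (i < a.hull.size() || j < b.hull.size()) {
        if (j == b.hull.size() || (i < a.hull.size() && a.hull[i].m <= b.hull[j].m))
            res.add(a.hull[i++]);
        else
            res.add(b.hull[j++]);
    }
    return res;
}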
Why is the total complexity $$$O(AN \log N)$$$?
Notice that each element goes through at most $$$\log N$$$ buckets of the $$$blk$$$ array (it only ever moves to a bucket with a larger index), and each move costs $$$O(A)$$$ to re-insert it into the merged structure. For example, the very first element may be re-inserted when the bucket of size $$$2$$$ is built, then again at sizes $$$4, 8, \dots$$$, which is $$$O(\log N)$$$ re-insertions in total. Since we add $$$N$$$ elements, the total cost is $$$O(AN \log N)$$$, i.e. amortized $$$O(A \log N)$$$ per insertion.
Now, to query this structure, we take the best answer over all non-empty data structures in $$$blk$$$.
A simple code demonstrating it would look like this:
T query(U input) {
    T ans = identity; // the neutral answer, e.g. -infinity for a maximum
    for (auto& ds : blk) {
        if (ds.empty()) continue;
        // better() returns whichever answer is better, or merges them;
        // again, the details depend on the data structure.
        ans = better(ans, ds.query(input));
    }
    return ans;
}
The complexity of a query is $$$O((Q + X) \log N)$$$, where $$$O(X)$$$ is the complexity of the better function; usually $$$X$$$ is a constant, which gives the claimed $$$O(Q \log N)$$$.
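Putting it all together for the CHT example (a sketch continuing the code above; LOG = 20 is an arbitrary bound allowing up to $$$2^{20}-1$$$ lines, the identity for a maximum is $$$-\infty$$$, and better is just max):

const int LOG = 20; // enough buckets for up to 2^20 - 1 lines (arbitrary bound)
MonotoneCHT blk[LOG];

void insertLine(Line l) { // amortized O(log N) per line
    MonotoneCHT current;
    current.add(l);
    int index = 0;
    while (!blk[index].empty()) {
        current = merge(current, blk[index]);
        blk[index].hull.clear();
        index++;
    }
    blk[index] = current;
}

long long queryMax(long long x) { // O(log^2 N): log N buckets, O(log N) search in each
    long long ans = LLONG_MIN; // identity for maximum
    for (int i = 0; i < LOG; i++)
        if (!blk[i].empty())
            ans = max(ans, blk[i].query(x));
    return ans;
}

Here $$$A = O(1)$$$ amortized and $$$Q = O(\log N)$$$, so insertion is amortized $$$O(\log N)$$$ per line and each query costs $$$O(\log^2 N)$$$, matching the bounds above.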
As practice for this technique, you can solve this problem.
If you know any more problems which apply this technique (with any data structure), please comment and I'll add them to the list :)
P.S. This is my first educational blog; if there's any mistake or any improvement I can make, please criticize me in the comments :)