This is my repo. There are many like it, but this one is mine. My repo is my best friend. It is my life. I must master it as I must master my life. Without me, my repo is useless. Without my repo, I am useless. I must maintain my repo true. I must commit faster than my collaborator, who is trying to open an issue. I must assign an issue to him before he assigns one to me. I will...
I will keep my repo clean and ready, even as I am clean and ready. We will become part of each other. We will...
Before Codeforces, I swear this creed. My repo and myself are the defenders of good solutions. We are the masters of our problemset. We are the saviors of my rating. So be it, until there is no WA, but AC. Amen.
Hi. I am sharing my algorithms repository with the community and inviting you to test and enrich it. It is designed to be minimalistic and copy-pastable; I use it during live contests. Most of the code is not designed to be standalone: you need to copy it and tweak it a bit.
https://github.com/sslotin/algo/blob/master/segtree_dynamic.cpp
Good luck with Memory Limits.
Apart from throwing off `lb` and `rb` (and rewriting all functions with 2 more params and a few more lines to recalculate them), and using an int vector instead of pointers (which would be the same size on 32-bit CF servers), how would you optimize it?

You need to sacrifice a lot for memory. I only really struggled with a tight ML when solving the 2D gcd problem from some past IOI (memory optimization was the main point of it); other problemsetters are not sadistic enough, in my experience.
Your `get_sum` creates 4·logX nodes in the worst case, while it should not create any at all. `add` creates 2 times more nodes than it needs to. With `lb`, `rb` and dynamic memory allocation, that is nearly a 7x memory overhead for an algorithm that already has O(N log N) space complexity. It's easy to make your code use 280 MB of memory (which is more than the usual ML on CF) with only 200K queries: https://www.ideone.com/idLyz6. And that's an O(N log N) algorithm, which usually may be used with far more queries. In my opinion, it is not a good idea to write library code in such a non-optimal form.
Fair point. I guess I should add memory- and performance-optimized versions of it, and maybe also some other data structures where I decided to use 2x more memory/time instead of writing 2x more code.
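For reference, a dynamic segment tree can avoid both of the issues raised above: queries never allocate (a missing subtree just means "all zeros"), and each point update allocates at most one path of nodes. Here is a hedged sketch under those assumptions; the names (`Node`, `add`, `get_sum`, `LB`, `RB`) are mine, not the repo's exact code. The segment bounds are passed down the recursion instead of being stored in every node.

```cpp
#include <cassert>

// Sketch: dynamic segment tree for point add / range sum that does not
// store segment bounds (lb, rb) in the nodes -- bounds are recursion
// arguments, so each node is just a sum and two child pointers.
struct Node {
    long long sum = 0;
    Node *l = nullptr, *r = nullptr;
};

const int LB = 0, RB = 1 << 30;  // global key range, assumed fixed

// Adds val at position pos; allocates at most one path of ~log(RB) nodes.
void add(Node*& t, int pos, long long val, int lb = LB, int rb = RB) {
    if (!t) t = new Node();
    t->sum += val;
    if (rb - lb == 1) return;
    int mid = lb + (rb - lb) / 2;
    if (pos < mid) add(t->l, pos, val, lb, mid);
    else           add(t->r, pos, val, mid, rb);
}

// Sum over [ql, qr); never allocates (missing subtree = all zeros).
long long get_sum(Node* t, int ql, int qr, int lb = LB, int rb = RB) {
    if (!t || qr <= lb || rb <= ql) return 0;
    if (ql <= lb && rb <= qr) return t->sum;
    int mid = lb + (rb - lb) / 2;
    return get_sum(t->l, ql, qr, lb, mid) + get_sum(t->r, ql, qr, mid, rb);
}
```

With this layout each node is one `long long` plus two pointers, and Q updates cost O(Q log RB) nodes total, which is the bound the comment above is asking for.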
How does the treap work without saving the priority of each node?
You can prove by induction that for each subtree (nodes from l to r), each of its nodes has an equal probability of being the root of that segment (try it, really), so this treap is equivalent to a treap with priorities. This trick is used for persistent treaps, where, with stored priorities, you could use copies to construct a test case in which a ton of nodes end up with the same priority. Generating random numbers on every merge takes too long, I know. I should split it into a persistent treap and a usual one (**upd**: done).
The treap has random priorities. Since one needs the priority only in the merge operation, it is convenient to generate a random value while merging (line 14). This means that every time you split and merge, the treap changes its 'form'. The form itself is irrelevant, as the logarithmic-depth invariant is the only thing we use random priorities for in most cases. It is also a time/memory trade-off.
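The merge described above can be sketched like this; it is a hedged illustration rather than the repo's exact code (the names are mine). The key point is the size-weighted coin flip in `merge`: the left tree becomes the root with probability size(a) / (size(a) + size(b)), which reproduces the root distribution of a treap with stored priorities.

```cpp
#include <cstdlib>
#include <vector>
#include <cassert>
#include <algorithm>

// Sketch: a treap that stores no priorities. merge() draws a random
// number on the fly instead of comparing stored priorities.
struct Node {
    int key, size;
    Node *l, *r;
    Node(int k) : key(k), size(1), l(nullptr), r(nullptr) {}
};

int sz(Node* t) { return t ? t->size : 0; }
void upd(Node* t) { if (t) t->size = 1 + sz(t->l) + sz(t->r); }

// Assumes every key in a is less than every key in b.
Node* merge(Node* a, Node* b) {
    if (!a) return b;
    if (!b) return a;
    if (rand() % (sz(a) + sz(b)) < sz(a)) {  // size-weighted coin flip
        a->r = merge(a->r, b); upd(a); return a;
    } else {
        b->l = merge(a, b->l); upd(b); return b;
    }
}

// Splits t into keys < key (a) and keys >= key (b).
void split(Node* t, int key, Node*& a, Node*& b) {
    if (!t) { a = b = nullptr; return; }
    if (t->key < key) { a = t; split(t->r, key, a->r, b); upd(a); }
    else              { b = t; split(t->l, key, a, b->l); upd(b); }
}

Node* insert(Node* t, int key) {
    Node *a, *b;
    split(t, key, a, b);
    return merge(merge(a, new Node(key)), b);
}

// In-order traversal; useful for checking the BST invariant.
void inorder(Node* t, std::vector<int>& out) {
    if (!t) return;
    inorder(t->l, out); out.push_back(t->key); inorder(t->r, out);
}
```

Note the trade-off mentioned above: `rand()` is called on every merge step, so this is slower per operation than a stored-priority treap, but each node is one int smaller.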
For the DSU implementation: you don't need to store the size of each tree in another array; just save, for every root, its size in the parent array as a negative number.
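A minimal sketch of this trick (the struct and method names are mine, not from the repo): one array `p` holds both roles, where `p[x] >= 0` means "parent of x" and `p[root]` is minus the size of that root's tree.

```cpp
#include <vector>
#include <utility>
#include <cassert>

// DSU storing sizes as negative numbers in the parent array itself.
struct DSU {
    std::vector<int> p;
    DSU(int n) : p(n, -1) {}  // every node starts as a root of size 1

    // Path-compressing find: p[x] < 0 marks x as a root.
    int find(int x) { return p[x] < 0 ? x : p[x] = find(p[x]); }

    int size(int x) { return -p[find(x)]; }

    // Union by size; returns false if a and b were already joined.
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        if (p[a] > p[b]) std::swap(a, b);  // a has the more negative (bigger) size
        p[a] += p[b];                      // sizes add up
        p[b] = a;
        return true;
    }
};
```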
Wow, that's a cool trick. Not going to use it for clarity of code, but thanks.
...alcotree?
An implementation of the non-recursive bottom-up segment tree described by Al.Cash. His handle means "alcoholic" in Russian.
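For readers who haven't seen that write-up, the bottom-up scheme looks roughly like this; a sketch with my own names, not the repo's file. The array occupies `t[n..2n-1]` as leaves, each internal node `t[i]` is the sum of its two children, and both operations are short loops rather than recursion.

```cpp
#include <vector>
#include <cassert>

// Sketch of the iterative (bottom-up) segment tree: point assignment,
// sum over a half-open range [l, r). Works for any n, not just powers of 2.
struct SegTree {
    int n;
    std::vector<long long> t;
    SegTree(int n) : n(n), t(2 * n, 0) {}

    void update(int i, long long v) {           // set a[i] = v
        for (t[i += n] = v; i > 1; i >>= 1)
            t[i >> 1] = t[i] + t[i ^ 1];        // parent = left child + right child
    }

    long long query(int l, int r) {             // sum over [l, r)
        long long res = 0;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res += t[l++];           // l is a right child: take it, move right
            if (r & 1) res += t[--r];           // r is a right child: step left, take it
        }
        return res;
    }
};
```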
This was known way before his post. For example: http://codeforces.me/blog/entry/1256
What is a centroid?
Centroid Decomposition